Conjunction fallacy and probabilistic risk assessment.

post by Dmytry · 2012-03-08T15:07:13.934Z · LW · GW · Legacy · 10 comments

Summary:

There is a very dangerous way in which the conjunction fallacy can be exploited. One can present you with two to five detailed, very plausible failure scenarios whose probabilities are shown to be very low using solid mathematics; if you suffer from the conjunction fallacy, this will look as if it implies the design is highly safe - while in fact it is the detailedness of the scenarios that makes their probabilities so low.

Even if you realize that there may be many other scenarios that were not presented to you, you are still left with an incredibly low probability number on a highly plausible ("most likely") failure scenario, which you, being unaware of the power of conjunction, attribute to the safety of the design.

The conjunction fallacy can be viewed as a poor understanding of the relation between plausibility and probability. Adding extra details doesn't make a scenario seem less plausible (it can even increase its plausibility), but it does, mathematically, make the scenario less probable.

Details:

What happens if a risk assessment is prepared for (and possibly by) sufferers of the conjunction fallacy?

Detailed example scenarios will be chosen, such as:

A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.

Then, as a risk estimate, you multiply the probability of a Russian invasion of Poland by the probability of it resulting in a suspension of diplomatic relations between the US and the USSR, and multiply by the probability of it happening specifically in 1983. The resulting probability can be extremely small for a sufficiently detailed scenario (you can add the Polish prime minister being assassinated if the probability is still too high for comfort).
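
To make the arithmetic concrete, here is a minimal sketch; all the probability values are invented for illustration and are not taken from any study or survey:

```python
# Each added detail multiplies the scenario's probability down,
# even though the scenario reads as more "plausible".
# All numbers below are made up for illustration.

p_invasion = 0.10                     # Russian invasion of Poland
p_suspension_given_invasion = 0.30    # relations suspended, given the invasion
p_specifically_1983 = 0.25            # it happens specifically in 1983
p_pm_assassinated = 0.02              # optional extra detail

scenario = p_invasion * p_suspension_given_invasion * p_specifically_1983
print(f"three-detail scenario: {scenario:.4%}")   # 0.7500%

scenario = scenario * p_pm_assassinated
print(f"with one more detail: {scenario:.4%}")    # 0.0150%
```

The number shrinks with every clause added, regardless of whether the underlying design is safe or not.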

To a sufferer of the conjunction fallacy it looks as if a very plausible, 'most likely' scenario has been shown to be highly improbable, and thus the risks are low. The sufferer of the conjunction fallacy does not expect that this probability could be very low even in an unsafe design.

It seems to me that risk assessment is routinely done in such a fashion. Consider the Space Shuttle's reliability estimates, or the NRC cost-benefit analyses for the spent fuel pools, which go as low as one failure in 45 million years for the most severe scenario. (The same seems to happen in all of the NRC resolutions, to varying extents; feel free to dig through them.)

Those reports looked outright insane to me - a very small number of highly detailed scenarios are shown to be extremely improbable - how in the world would anyone think that this implies safety? How can anyone take seriously a one-in-45-million-years scenario? That's near the point where a meteorite impact leads to social disorder that leads to the fuel pool running dry!

I couldn't understand that. Detailed scenarios are inherently unlikely to happen whether the design is safe or not; their unlikelihood is a property of their detailedness, not of the safety or unsafety of the design.

Until it clicked: if you read those reports through the goggles of the conjunction fallacy, the scenarios that look like the most likely failure modes are the ones being shown to be incredibly improbable. Previously (before reading LessWrong) I didn't really understand how anyone could buy into this sort of stuff, and I could find no way to even argue. You can't quite talk someone out of something when you don't understand how they came to believe it. You say "there may be many scenarios that were not considered", and they know that already.

This is one seriously dangerous way in which the conjunction fallacy can be exploited. It seems to be rather common in risk analysis.

Note: I do think that the conjunction fallacy is responsible for much of the credibility given to such risk estimates. No one seems to seriously believe that the NRC always covers all the possible scenarios, yet at the same time there seems to be a significant misunderstanding of the magnitude of the problem: the NRC risk estimates are taken as being within the ballpark of the correct value in the cost-benefit analysis for safety features. For nuclear power, widespread promotion of the results of such analyses leads to a massive loss of public trust once an accident happens, and consequently to a narrowing of the available options and a transition to less desirable energy sources (coal in particular), which is in itself a massive disutility.

[The other issue in the linked NRC study is, of course, that the cost-benefit analysis used internal probability when it should have used external probability.]

edit: minor clarifications

edits: improved the abstract and clarified the article further based on the comments.

10 comments

Comments sorted by top scores.

comment by [deleted] · 2012-03-08T17:09:58.738Z · LW(p) · GW(p)

Cool, thanks for pointing that out. The field dealing with this sort of thing is called "reliability engineering", and it is apparently not widely studied.

The correct way to estimate failure rates and such is to draw a directed acyclic graph of all the components in the system so that there is at least one path from "not working" to "working". If the failure probability of each component is known, then you do the big probability join over all possible detailed scenarios for success.

Usually, the graph degenerates to one big conjunction, but if you have redundant components or systems, it can be more complex.
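
As a rough illustration of that idea (not the commenter's actual procedure): assuming independent components with made-up failure probabilities, a chain where everything must work and a redundant pair where at least one must work combine like this:

```python
# Minimal fault-tree-style sketch; component failure probabilities are
# invented, and independence is assumed throughout.

def series_failure(*p_fail):
    """The subsystem fails if ANY component in the chain fails."""
    p_all_work = 1.0
    for p in p_fail:
        p_all_work *= (1.0 - p)
    return 1.0 - p_all_work

def parallel_failure(*p_fail):
    """The subsystem fails only if ALL redundant components fail."""
    p_all_fail = 1.0
    for p in p_fail:
        p_all_fail *= p
    return p_all_fail

pump_a, pump_b, controller = 0.01, 0.01, 0.005
# Two redundant pumps feeding a single (non-redundant) controller:
p_system = series_failure(parallel_failure(pump_a, pump_b), controller)
print(p_system)  # ~0.0051 - dominated by the non-redundant controller
```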

comment by Viliam_Bur · 2012-03-08T16:52:40.428Z · LW(p) · GW(p)

"Conjunction fallacy and probabilistic risk assessment" is enough for a title; you can move the applause lights into the article.

Now, what exactly is that fallacy in the risk assessment? Seems like it is thinking that there is only one very precise path by which things could go wrong, and then (perhaps correctly) calculating the probability of that one very precise path and reporting the result as the total risk: "This is the only way things could fail, and its probability is low, so everything is safe."

EDIT: After reading the articles, there are many different fallacies combined, many of them probably caused by motivated cognition -- we want the project to succeed, and we want it now.

Replies from: Dmytry
comment by Dmytry · 2012-03-08T19:50:29.976Z · LW(p) · GW(p)

Now, what exactly is that fallacy in the risk assessment? Seems like it is thinking that there is only one very precise path by which things could go wrong, and then (perhaps correctly) calculating the probability of that one very precise path and reporting the result as the total risk: "This is the only way things could fail, and its probability is low, so everything is safe."

That's what I thought too, but the people who buy into this sort of assessment do acknowledge that it can be a bit off and that there could be other scenarios. They still think that the overall risk is somewhere in the few-per-million-years range, and I never could quite get why. Now my theory is that, due to the conjunction fallacy, they see those highly detailed plausible paths to failure as the likely way it would fail - and then, if the likely way it could fail is so unlikely, it must be safe. They don't expect that this path to failure may be extremely unlikely even in a very unsafe design.

comment by CarlShulman · 2012-03-08T15:41:30.279Z · LW(p) · GW(p)

See Yvain's post on this.

comment by Giles · 2012-03-08T22:37:51.035Z · LW(p) · GW(p)

Yeah. It never would have occurred to me to file this under "conjunction fallacy" but you're right - the conjunction fallacy seems to come into play when you try to rank probabilities but presumably melts away somewhat when you try to put hard numbers to them.

So there's a two-stage process here - first the ranking stage ("what are the most likely disasters you can think of?"), which is flawed; then the quantitative stage, which is more reliable. Combine them and you get flawed * reliable = flawed.

This could also be viewed in terms of the availability heuristic - the risk assessment team writes down the most available risks (which turn out to be detailed scenarios) and then stops when they feel they've done enough work. It can also obviously be viewed as a special case of the Inside View.

A possible workaround would be to list a bunch of risks and estimate the probability of each. If the highest is 0.002, then ask: "can we think of any (possibly less detailed) risk scenarios with a probability similar to or greater than 0.002?" Repeat this process until the answer is no.
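
A sketch of that loop, with the scenario names and numbers purely illustrative and the elicitation step left as a human placeholder:

```python
# Illustrative sketch of the iterative workaround; scenario names,
# probabilities, and the elicitation step are all placeholders.

scenarios = {"detailed scenario A": 0.002}

def elicit_scenarios_above(threshold):
    """Human step: 'can we think of any (possibly less detailed)
    scenario with probability similar to or greater than threshold?'"""
    return {}  # the assessment team fills this in each round

while True:
    threshold = max(scenarios.values())
    new = elicit_scenarios_above(threshold)
    if not new:
        break
    scenarios.update(new)

# Rough aggregate, valid only if the listed scenarios don't overlap:
print(sum(scenarios.values()))
```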

Replies from: Dmytry
comment by Dmytry · 2012-03-09T07:37:21.439Z · LW(p) · GW(p)

The problem is that this method, by its very nature, works only on highly detailed scenarios.

I think the method's domain needs to be refined. The method can sometimes demonstrate a deficiency in a design; it pretty much can't demonstrate the absence of deficiencies. It is a very strong result when something fails a PRA - and a very weak result when something passes.

Thus the method absolutely can't be used the way the NRC uses it.

On top of that, there may be a good effort==good results fallacy at work as well. It just doesn't fit into people's heads that someone would do some 'annoying math' entirely meaninglessly. And educational institutions, as well as society, tend to reward effort rather than results. People don't expect that lower apparent effort can at times produce better results.

With regard to the list of scenarios, it is very problematic. Say I propose: what is the probability of the contractor using sub-standard concrete? Okay, you can't do math on that until you break it down into the contractor's family starving, or the contractor needing money, or the contractor being evil, or the like. At that point you pick a zillion values out of thin air and do some math on them, and it's signalling time - if you bothered to do 'annoying math', you must truly believe your values mean something.

This actually seems to be widespread in the nuclear 'community'. On the nuclear section of physicsforums there were a couple of genuine nuclear engineering / safety people who would do all sorts of calculations about Fukushima - to much cheering from the innumerate crowd. For example, calculating that there is enough of isotope A in the spent fuel to explain its concentration in the spent fuel pool water - never mind that there is also a lot of isotope B in the spent fuel, and if you break enough old fuel tubes open to get that much isotope A out, you get far more (~10,000x) isotope B into the water than was actually observed, so you need some 10,000x refinement process to get the observed ratio, which was the same as at other reactors.

Not only do you need to postulate some unspecified refinement process (which might happen - those were iodine and cesium and they have different chemistry - but which didn't happen beyond a factor of 5 even through the food chain of the fish and so on), you need it to magically compensate for the difference in the sources' ages and match the ratio seen everywhere else, which is just plain improbable by simple-as-day statistics. It is bloody obvious that the contamination probably just got pumped in with the contaminated cooling water, but you can't really stick a lot of mathematics onto that hypothesis and get a cheer from the innumerate crowd, whereas you can stick a lot of mathematics onto a calculation of how much isotope A is left in the spent fuel after n days.

Likewise with the risk estimates: you can't stick a lot of mathematics onto something reasonable, but you can very easily produce nonsense with a lot of calculations on values picked out of thin air, and signal confidence in your assumptions. Surely, if you bothered to do something complicated with the assumptions, they can't be garbage.

tl;dr: in some fields, mathematics seems to be used for signalling purposes. When you make up a value out of thin air and present it, that is, like, just your opinion, man. When you make up a dozen values out of thin air and crunch the numbers to get some result, it is immediately more trusted.

Replies from: Giles
comment by Giles · 2012-03-09T19:51:11.613Z · LW(p) · GW(p)

I think I agree with everything here. Would it be fair to summarize this as:

  • Proposals such as mine won't do any good, because this is fundamentally a cultural problem, not a methodological one
  • People know what "math" looks like, but they don't understand Bayes (as in your isotope example)

Replies from: Dmytry
comment by Dmytry · 2012-03-09T20:23:33.281Z · LW(p) · GW(p)

Yea... well, with math in general, you can quite effectively mislead people by computing some out-of-context value which grossly contradicts their fallacious reasoning. The fallacious reasoning is then still present and still going strong, and something else gives in to explain that number.

comment by Kaj_Sotala · 2012-03-08T18:31:30.577Z · LW(p) · GW(p)

I think this should be on the front page.

Replies from: Randaly
comment by Randaly · 2012-03-08T20:09:41.472Z · LW(p) · GW(p)

Agreed.