What we could learn from the frequency of near-misses in the field of global risks (Happy Bassett-Bordne day!)

post by turchin · 2015-10-28T18:28:51.613Z · LW · GW · Legacy · 9 comments

I wrote an article on how we could use such data to estimate the cumulative probability of nuclear war up to now.

TL;DR: from other domains we know that the ratio of close calls to actual events is around 100:1. If we apply this ratio to nuclear war, and assume that there were many more near-misses than the ones we know of, we can conclude that the probability of nuclear war up to now has been very high, and that we live in an improbable world where it didn't happen.

Yesterday, 27 October, was Arkhipov day, in memory of the man who prevented nuclear war. Today, 28 October, is Bordne and Bassett day, in memory of the Americans who prevented another near-war event. Bassett did most of the work of preventing a launch based on a false attack code, and Bordne made the story public.

The history of the Cold War shows us that there were many occasions when the world stood on the brink of disaster, the most famous of them being the cases of Petrov, Arkhipov, and the recently revealed Bordne case in Okinawa.

I know of more than ten, but fewer than a hundred, similar cases of varying degrees of reliability. Other global catastrophic near-misses are not nuclear but biological, such as the Ebola epidemic, swine flu, bird flu, AIDS, oncoviruses, and the SV-40 contamination of early polio vaccines.

The pertinent question is whether we have survived as a result of observational selection, or whether these cases are not statistically significant.

In the Cold War era, these types of situations were quite numerous (the Cuban missile crisis being the most famous). However, in each case it is difficult to say whether the near-miss was actually dangerous. In some cases the probability of disaster was subjective: according to the participants it was large, whereas objectively it was small. Other near-misses may have been genuinely dangerous without being recognized as such by the operators.

We can define a near-miss of the first type as a case that meets both of the following criteria:

a) safety rules have been violated

b) emergency measures were applied in order to avoid disaster (e.g. emergency braking of a vehicle, or a refusal to launch nuclear missiles)

A near-miss can also be defined as an event which, according to some of its participants, was very dangerous, or as an event during which a number of the preconditions of a possible catastrophe (but not all of them) coincided.

Another type of near-miss is the miraculous salvation. This is a situation in which a disaster was about to happen, but did not happen because of a happy coincidence of newly emerged circumstances (for example, a bullet stuck in the gun barrel). Obviously, in cases of miraculous salvation the chance of catastrophe was much higher than in near-misses of the first type, on which we will now focus.

We can take near-miss statistics from other areas where a known ratio between near-misses and actual events exists; for example, we can compare the statistics of near-misses against actual accidents with victims in transport.

Industrial research suggests that across different areas there are 50-100 near-misses for every actual accident, and roughly 10,000 human errors or violations of regulations (“Gains from Getting Near Misses Reported”).

Other surveys estimate the ratio at 1 to 600, at 1 to 300, and even at 1 to 3000 (though the last applies to unplanned maintenance).

The spread of estimates, from 100 to 3000, reflects the fact that they cover different industries and use different criteria for what counts as a near-miss.

However, the average ratio of near-misses to actual events is in the hundreds, so we cannot conclude that the observed non-occurrence of nuclear war is the result of observational selection.

On the other hand, we can use near-miss frequency to estimate the risk of a global catastrophe. We will use the lower estimate of one real case per 100 near-misses, because the type of phenomenon with the highest near-miss ratio will dominate the probability landscape. (For example, if an epidemic turns catastrophic in 1 of 1000 near-miss cases, while for nuclear disasters the ratio is 1 in 100, then near-misses in the nuclear field will dominate.)
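
A minimal sketch of this domination effect (the near-miss counts and ratios below are illustrative assumptions, not data):

```python
# Two hypothetical risk domains with similar near-miss counts but
# different catastrophe-per-near-miss ratios. All numbers are
# illustrative assumptions.
domains = {
    "nuclear":  {"near_misses": 50, "ratio": 1 / 100},
    "epidemic": {"near_misses": 50, "ratio": 1 / 1000},
}

total = 0.0
for name, d in domains.items():
    expected = d["near_misses"] * d["ratio"]  # implied expected catastrophes
    total += expected
    print(f"{name:8s}: expected catastrophes ~ {expected:.3f}")

print(f"total   : {total:.3f}  (the 1/100 domain contributes ~90% of it)")
```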

During the Cold War there were several dozen near-misses, along with several near-miss epidemics in the same period. This suggests that at the current level of technology we have about one such case a year, or perhaps more. If we follow the press, several times a year some situation arises that could lead to a global catastrophe: a threat of war between North and South Korea, an epidemic, the close passage of an asteroid, a global crisis. And many near-misses remain classified.

If the average level of safety with regard to global risks does not improve, this frequency of near-misses suggests that a global catastrophe could happen within the next 50-100 years, which coincides with estimates obtained by other means.
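
As a rough check of that timescale, here is a minimal sketch under the post's own assumptions (about one near-miss per year and one catastrophe per 100 near-misses):

```python
import math

near_misses_per_year = 1.0    # rough estimate from the post
catastrophes_per_near_miss = 1 / 100

rate = near_misses_per_year * catastrophes_per_near_miss  # events per year
print(f"expected waiting time ~ {1 / rate:.0f} years")

# Poisson model: probability of at least one catastrophe within a horizon
for horizon in (50, 100):
    p = 1 - math.exp(-rate * horizon)
    print(f"P(catastrophe within {horizon} years) ~ {p:.0%}")

# expected waiting time ~ 100 years
# P(catastrophe within 50 years) ~ 39%
# P(catastrophe within 100 years) ~ 63%
```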

It is important to increase detailed reporting of such cases in the field of global risks, and to learn how to draw useful conclusions from them. In addition, we need to reduce the frequency of near-misses in areas of global risk by rationally and responsibly raising the overall level of security measures.

9 comments

comment by Galap · 2015-11-02T02:11:12.199Z · LW(p) · GW(p)

I don't really buy it. The world is changing too fast. Things are way different now than they were in the 50s, so I don't think the statistics from then really mean much anymore.

In another 50 years, what will the landscape look like? Who knows? Maybe diseases won't really be such a huge problem, because our antivirals will have become as good as our antibiotics.

The one thing that can be said with pretty high certainty is that for the most part it will be a completely different world in the second half of the 21st century.

Looking at stuff in the second half of the 20th century to predict the 21st isn't going to cut it, the same way that looking at politics and wars in the 1860s wouldn't produce any useful results about the 1960s.

Replies from: turchin
comment by turchin · 2015-11-02T11:38:43.368Z · LW(p) · GW(p)

Earlier near-misses are better known because secrecy has been lifted. But such events still happen: the nuclear alert in Russia in 1995, the India-Pakistan standoff in 2001, Ebola in 2014. So the main question is whether general safety and sanity levels will rise.

They are probably rising in the main superpowers, but we have many new nuclear countries, as well as new risky technologies.

We don't know what kinds of technologies will dominate in the second half of the 21st century, but the more important question is what safety levels will be applied to them.

We can see that safety is generally improving in all domains: nuclear power, cars, and planes are all safer now. But the number of users is also growing, which may result in more accidents.

So near-misses may give a very preliminary and rough estimate of the general safety level that is typical of humanity, and thus could be used to form reasonable expectations about future risks.

It also suggests that raising general safety levels in all domains may be a universal instrument for preventing global catastrophes.

But the number of "trials" is also rising, and that raises the possibility of even very improbable catastrophes.

comment by MarsColony_in10years · 2015-10-29T05:54:02.020Z · LW(p) · GW(p)

Excellent start and setup, but I diverge from your line of thought here:

We will use the lower estimate of one real case per 100 near-misses, because the type of phenomenon with the highest near-miss ratio will dominate the probability landscape. (For example, if an epidemic turns catastrophic in 1 of 1000 near-miss cases, while for nuclear disasters the ratio is 1 in 100, then near-misses in the nuclear field will dominate.)

I'm not sure I buy this. We have two types of near misses (biological and nuclear). Suppose we construct some probability distribution for near-misses, ramping up around 1/100 and ramping back down at 1/1000. That's what we have to assume for any near-miss scenario, if we know nothing additional. I'll grant that if we roll the dice enough times, the 1/100 cases will start to dominate, but we only have 2 categories of near misses. That doesn't seem like enough to let us assume a 1/100 ratio of catastrophes to near misses.

Additionally, there does seem to be good reason to believe that the rate of near misses has gone down since the cold war ended. (Although if any happened, they'd likely still be classified.) That's not to say that our current low rate is a good indicator, either. I would expect our probability of catastrophe to be dominated by the probability of WWIII or another cold war.

We had 2 world wars in the first 50 years of last century, before nuclear deterrence substantially lowered the probability of a third. If that's a 10x reduction, then we can expect 0.4 a century instead of 4 a century. If there's a 100x reduction, then we might expect 0.04 world wars a century. Multiply that by the probability of nuclear winter given WWIII to get the probability of disaster.
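
In code, that arithmetic looks like this (the baseline rate and the 10x/100x reduction factors are the guesses above):

```python
world_wars_per_century = 2 / 50 * 100  # 2 wars in 50 years = 4 per century

for reduction in (10, 100):
    rate = world_wars_per_century / reduction
    print(f"{reduction}x deterrence reduction: {rate} world wars per century")

# 10x  -> 0.4 world wars per century
# 100x -> 0.04 world wars per century
```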

However, I suspect that another cold war is more likely. We spent ~44 of the past 70 years in a cold war. If that's more or less standard, then on average we might expect to spend 63% of any given century in a cold war. This can give us a rough range of probabilities of armageddon:

  • 1 near miss per year spent in cold war × 63 years spent in cold war per century × 1 nuclear winter per 100 near misses = 63% chance of nuclear winter per century

  • 0.1 near misses per year spent in cold war × 63 years spent in cold war per century × 1 nuclear winter per 3000 near misses = 0.21% chance of nuclear winter per century

For the record, this range corresponds to a projected half-life between roughly 1 century and ~100 centuries. That's a much broader range than your 50-100 year prediction. I'm not even sure where to start to guesstimate the risk of an engineered pandemic.
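
A sketch of the two scenarios above, treating the per-century figure as a Poisson rate (which is why the implied probabilities and half-lives bracket, rather than exactly match, the numbers quoted; all rates and ratios are the assumptions above):

```python
import math

cold_war_years_per_century = 63  # ~44 of the past 70 years, extrapolated

scenarios = [
    # (near-misses per cold-war year, nuclear winters per near-miss)
    (1.0, 1 / 100),   # pessimistic end
    (0.1, 1 / 3000),  # optimistic end
]

for nm_rate, winters_per_nm in scenarios:
    rate = nm_rate * cold_war_years_per_century * winters_per_nm  # per century
    p_century = 1 - math.exp(-rate)  # P(at least one nuclear winter/century)
    half_life = math.log(2) / rate   # centuries until P(survival) drops to 50%
    print(f"rate {rate:.4f}/century: P ~ {p_century:.2%}, "
          f"half-life ~ {half_life:.0f} centuries")

# rate 0.6300/century: P ~ 46.74%, half-life ~ 1 centuries
# rate 0.0021/century: P ~ 0.21%, half-life ~ 330 centuries
```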

Replies from: turchin
comment by turchin · 2015-10-29T09:01:15.116Z · LW(p) · GW(p)

"I'll grant that if we roll the dice enough times, the 1/100 cases will start to dominate, but we only have 2 categories of near misses. That doesn't seem like enough to let us assume a 1/100 ratio of catastrophes to near misses."

In this case the total probability will be something like (1/100 + 1/3000)/2, or almost 1/200. If we look into the nature of Cold War near-misses, we can see that the 1/100 estimate is more plausible. More research is needed to determine which field is most comparable to the Cold War; it would probably be accidents at nuclear power stations. A large study on the topic is here: http://www-pub.iaea.org/MTCD/Publications/PDF/Pub1545_web.pdf But it doesn't give an exact estimate of the frequency of near-misses, putting it anywhere from a few to thousands. It defines a near-miss as a chain of events, and it also states that increased security measures help to reduce near-miss frequency.

In my first link, the near-miss frequency is already aggregated: "Studies in several industries indicate that there are between 50 and 100 near misses for every accident. Also, data indicates that there are perhaps 100 erroneous acts or conditions for every near miss. This gives a total population of roughly 10,000 errors for every accident. Figure 1 illustrates the relationships between accidents, near misses and non-incidents." http://www.process-improvement-institute.com/_downloads/Gains_from_Getting_Near_Misses_Reported_website.pdf

Replies from: MarsColony_in10years
comment by MarsColony_in10years · 2015-10-29T15:39:48.528Z · LW(p) · GW(p)

Ah, thanks for the explanation. I interpreted the statement as you trying to demonstrate that number of nuclear winters / number of near misses = 1/100. You are actually asserting this instead, and using the statement to justify ignoring other categories of near misses, since the largest will dominate. That's a completely reasonable approach.

I really wish there were a good way to estimate the accidents-per-near-miss ratio. Maybe medical mistakes? They have drastic consequences if you mess up, but involve a lot of routine paperwork. But this assumes that the dominant factor in the ratio is the severity of the consequences. (Probably a reasonable assumption. Spikes on steering wheels make better drivers, and bumpers make less careful forklift operators.) I'll look into this when I get a chance.

comment by DanArmak · 2015-11-23T15:30:50.566Z · LW(p) · GW(p)

I would expect a very high proportion of near-misses to stay secret. You don't make yourself look good by telling the world you nearly accidentally triggered an x-risk event. There's a huge incentive to cover it up.

Replies from: turchin
comment by turchin · 2015-11-23T16:48:12.210Z · LW(p) · GW(p)

That is true. I think we know of only 1 in 10 near-misses in the nuclear weapons field. Underreporting of near-misses is a well-known problem in other domains.

comment by PhilGoetz · 2015-10-29T03:17:14.575Z · LW(p) · GW(p)

Such near-misses should be divided into those with natural causes, and those caused by humans. Sometimes the distinction is fuzzy; for instance, the Lyme epidemic in the US was pretty obviously caused by covering the northeast US with lawns, killing all the wolves, and severely restricting hunting, so that we have a deer population several times larger than a century ago in more regular contact with humans.

My concern is that the number of human-caused near-misses has increased over the past century, and it's hard to imagine this stopping or even slowing. What plausible scenario can give free intelligent life on Earth a life expectancy of another thousand years?

Replies from: turchin
comment by turchin · 2015-11-02T14:05:38.442Z · LW(p) · GW(p)

If safety levels drastically rise, that will prevent future near-misses. But to achieve this we need some kind of fail-safe, highly intelligent control mechanism. It is easy to say that this will be AI, but we know that its creation carries its own risks.

I have a map of x-risk prevention plans, which includes all known ideas on how to prevent such risks, but I am not sure that it will work.