Caught in the glare of two anthropic shadows
post by Stuart_Armstrong · 2013-07-04T19:54:16.922Z · LW · GW · Legacy · 10 comments
This article consists of original new research, so it would not get published on Wikipedia!
The previous post introduced the concept of the anthropic shadow: the fact that certain large and devastating disasters cannot be observed in the historical record, because if they had happened, we wouldn't be around to observe them. This absence forms an “anthropic shadow”.
But that was the result for a single category of disasters. What would happen if we considered two independent classes of disasters? Would we see a double shadow, or would one ‘overshadow’ the other?
To answer that question, we’re going to have to analyse the anthropic shadow in more detail, and see that there are two separate components to it:
- The first is the standard effect: humanity cannot have developed a technological civilization, if there were large catastrophes in the recent past.
- The second effect is the lineage effect: humanity cannot have developed a technological civilization, if there was another technological civilization in the recent past that survived to today (or at least, we couldn't have developed the way we did).
To illustrate the difference between the two, consider the following model. Segment time into arbitrarily sized “eras”. In a given era, a large disaster may hit with probability q, and a small disaster may independently hit with probability q (hence with probability q², there will be both a large and a small disaster). A small disaster will prevent a technological civilization from developing during that era; a large one will prevent such a civilization from developing in that era or the next one.
If it is possible for a technological civilization to develop (no small disaster that era, no large one in the preceding era, and no previous civilization), then one will do so with probability p. We will assume p is constant: our model will only span a time frame where p is unchanging (maybe it's over the time period after the rise of big mammals?).
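To make the toy model concrete, here is a minimal simulation sketch in Python (the post itself contains no code, so the function names, the era count, and the values q = 0.3, p = 0.2 below are my own illustrative assumptions). It generates histories and checks how often the two eras before the first civilization show a large disaster followed by a small one, the "anthropic shadow" pattern analysed below.

```python
import random

def simulate_history(n_eras, q, p):
    """One run of the toy model: each era independently gets a large disaster
    with probability q and a small disaster with probability q.  A small
    disaster blocks civilization in its own era; a large one blocks its own
    era and the next.  In any unblocked era a technological civilization
    arises with probability p, and only the first one to arise counts."""
    states, civ_era = [], None
    for t in range(n_eras):
        large = random.random() < q
        small = random.random() < q
        states.append("large" if large else ("small" if small else "clear"))
        blocked = large or small or (t > 0 and states[t - 1] == "large")
        if civ_era is None and not blocked and random.random() < p:
            civ_era = t
    return states, civ_era

# How often does an observer's immediate past show the "large, then small,
# then us" pattern?  (q, p and the run counts are arbitrary choices.)
random.seed(0)
q, p = 0.3, 0.2
shadow = observed = 0
for _ in range(20_000):
    states, civ = simulate_history(50, q, p)
    if civ is not None and civ >= 2:
        observed += 1
        if states[civ - 2] == "large" and states[civ - 1] == "small":
            shadow += 1
print("fraction of observed pasts ending large-small-us:", shadow / observed)
```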
Assume a technological civilization develops during a given era (in which, of course, there are no disasters). Let _ denote no disaster, ▄ denote a small disaster only, and █ denote a large disaster (with or without a small disaster as well). Then the possible past sequences that end in the current era (which is a _ by definition) can be divided into sequences that end in the following ways (the anthropic shadow is the first row):
| Ending sequence | Prior probability | Anthropic probability |
|---|---|---|
| █ ▄ _ | q²(1-q) | q²(1-q)/Ω |
| █ _ | q | 0 |
| ▄ ▄ _ or _ ▄ _ | q(1-q)² | q(1-q)²/Ω |
| █ _ ... _ or ▄ _ ... _ | (1-q)² | (1-(1-q)²)·((1-q²)(1-p)/(1-(1-q²)(1-p)))/Ω |
The “prior probability” column gives the probabilities of these various pasts, without any anthropic correction. The “anthropic probability” column applies the anthropic correction, essentially changing some of the probabilities and then renormalising by Ω, which is the sum of the remaining probabilities, namely q²(1-q) + q(1-q)² + (1-(1-q)²)·((1-q²)(1-p)/(1-(1-q²)(1-p))). Don’t be put off by the complexity of that last formula: the ideas are what matter.
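As a sanity check, a short sketch can tabulate both columns for given q and p, using the fourth-row formula exactly as written above (the values q = 0.3, p = 0.2 are again arbitrary illustrative choices, not anything from the post):

```python
def shadow_table(q, p):
    """Prior and anthropically corrected probabilities for the four endings
    of the single-class table, using the formulas given in the post."""
    run_term = ((1 - (1 - q)**2) * (1 - q**2) * (1 - p)
                / (1 - (1 - q**2) * (1 - p)))        # fourth-row correction
    prior = {
        "large, small, now (shadow)": q**2 * (1 - q),
        "large, now":                 q,
        "small only, now":            q * (1 - q)**2,
        "clear era(s), now":          (1 - q)**2,
    }
    corrected = {
        "large, small, now (shadow)": q**2 * (1 - q),
        "large, now":                 0.0,            # ruled out anthropically
        "small only, now":            q * (1 - q)**2,
        "clear era(s), now":          run_term,       # penalised by the lineage effect
    }
    omega = sum(corrected.values())                   # the renormaliser Ω
    return prior, {k: v / omega for k, v in corrected.items()}

# Illustrative values only; the post does not fix q or p.
prior, anthropic = shadow_table(q=0.3, p=0.2)
for key in prior:
    print(f"{key:28s} prior={prior[key]:.3f}  anthropic={anthropic[key]:.3f}")
```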
The standard anthropic effect rules out the second row: we can’t have large disasters in the previous era. The lineage effect reduces the probability of the fourth row: we have less chance of developing a technological civilization in this era, if there are many opportunities for a previous one to develop. Both these effects increase the relative probability of the first row, which is the anthropic shadow.
The first thing to note is that the standard effect is very strong for high q. If q is very close to 1, then the third and fourth rows, being multiples of (1-q)², are much less likely than the first row, which has only a single power of (1-q). Hence an anthropic shadow is nearly certain.
The lineage effect is weaker. Even if p=1, the only effect is to rule out the fourth row. The first and the third row remain as possibilities, with ratios of q²:q(1-q): hence we still need a reasonable q to get an anthropic shadow. If we break down the scale of the disaster into more than two components, and insist on a strict anthropic shadow (regularly diminishing disaster intensity), then the lineage effect becomes very weak indeed. If the data is poor, though, or if we allow approximate shadows (if for instance we consider the third row as an anthropic shadow: civilization appeared as soon after the small disaster as it could), then the lineage effect can be significant.
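Indeed, the ratio q²:q(1-q) reduces to q:(1-q), so with p = 1 the probability of seeing the strict █ ▄ _ shadow works out to just q. A quick check of this, reusing the corrected column of the table above (the q values are arbitrary illustrative choices):

```python
def shadow_probability(q, p):
    """P(the █ ▄ _ shadow) after both anthropic effects, taken from the
    corrected column of the table above."""
    run_term = ((1 - (1 - q)**2) * (1 - q**2) * (1 - p)
                / (1 - (1 - q**2) * (1 - p)))
    omega = q**2 * (1 - q) + q * (1 - q)**2 + run_term
    return q**2 * (1 - q) / omega

# With p = 1 the fourth row vanishes, and the shadow probability is just q:
for q in (0.1, 0.5, 0.9):
    print(q, round(shadow_probability(q, p=1.0), 3))
```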
What has this got to do with multiple disasters? If we look at meteor impacts and (non-meteor-caused) supervolcanoes, what should we expect to see? The simple rule is that the standard anthropic shadows of two classes of disasters combine (we see two anthropic shadows), while the lineage effect of the most devastating disaster dominates.
How can we see this?
Let’s ignore the lineage effect for the moment, and let q and r be the probabilities for the two classes of disasters. Then instead of having four situations, as above, we have sixteen: four possibilities for each class of disaster, combined independently. We can represent this by the following table (each cell is the product of its row and column probabilities):
|  | q²(1-q) | q | q(1-q)² | (1-q)² |
|---|---|---|---|---|
| r²(1-r) | q²(1-q)·r²(1-r) | q·r²(1-r) | q(1-q)²·r²(1-r) | (1-q)²·r²(1-r) |
| r | q²(1-q)·r | q·r | q(1-q)²·r | (1-q)²·r |
| r(1-r)² | q²(1-q)·r(1-r)² | q·r(1-r)² | q(1-q)²·r(1-r)² | (1-q)²·r(1-r)² |
| (1-r)² | q²(1-q)·(1-r)² | q·(1-r)² | q(1-q)²·(1-r)² | (1-q)²·(1-r)² |
Then applying the standard anthropic effects means removing the second row, then removing the second column (or vice versa), and then renormalising. The anthropic effect for the first class of disasters moves the probability of its anthropic shadow from q²(1-q) to q², while the joint probability of the combined anthropic shadow (the top-left cell of the table) moves from q²(1-q)·r²(1-r) to q²·r². The two shadows are independent.
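Here is a small sketch of that combination step (it uses numpy, and the values q = 0.3, r = 0.2 are arbitrary illustrative choices): it builds the 4×4 table of joint priors, zeroes out the forbidden row and column, renormalises, and confirms that the top-left cell equals q²·r².

```python
import numpy as np

def combined_shadow(q, r):
    """Joint ending-sequence table for two independent disaster classes,
    with the standard anthropic effect applied to each class
    (the lineage effect is ignored here, as in the text)."""
    def marginals(x):
        # The four rows/columns of the single-class table:
        # █ ▄ _,   █ _,   ▄ ▄ _ or _ ▄ _,   clear era(s) before now
        return np.array([x**2 * (1 - x), x, x * (1 - x)**2, (1 - x)**2])

    joint = np.outer(marginals(q), marginals(r))  # 4x4 table of joint priors
    joint[1, :] = 0.0  # a large class-1 disaster in the previous era: unobservable
    joint[:, 1] = 0.0  # likewise for class 2
    return joint / joint.sum()                    # renormalise

q, r = 0.3, 0.2  # arbitrary illustrative values
table = combined_shadow(q, r)
print("combined shadow probability:", table[0, 0])  # top-left cell of the table
print("q^2 * r^2:                  ", q**2 * r**2)  # the shadows combine independently
```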
In contrast, the lineage effect rules out long series of disaster-free eras before our own era. If a long series is possible (q and r both being low), then the lineage effect has a big impact. But the likely number of disaster-free eras is determined mainly by the larger of q and r. If q and r are both 1%, then we’d expect something like 25 disaster-free eras in a row. If q is 10% and r is 1%, then we’d expect something like 5 disaster-free eras in a row - the same as if r were zero, and we were facing a single class of disaster.
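A back-of-the-envelope check of those run lengths: an era is disaster-free only if it avoids both sizes of both classes, so the per-era disaster probability is 1 - (1-q)²(1-r)², and the expected run is roughly its reciprocal (the helper name below is my own):

```python
def expected_clear_run(q, r):
    """Rough expected number of consecutive disaster-free eras: the chance
    that an era has a disaster of either class, large or small, is
    1 - (1-q)^2 * (1-r)^2, and the expected run is about its reciprocal."""
    return 1.0 / (1.0 - (1 - q)**2 * (1 - r)**2)

print(expected_clear_run(0.01, 0.01))  # ~25 eras
print(expected_clear_run(0.10, 0.01))  # ~5 eras, dominated by q
print(expected_clear_run(0.10, 0.00))  # ~5 eras: the same as if r were zero
```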
So though the real-world data is probably too poor to conclude anything, the above result raises the possibility that we could estimate the lineage effect independently, by looking at the anthropic shadows of unrelated disasters - and thus get another way of calculating the probability of technological civilizations such as our own emerging.
10 comments
comment by OrphanWilde · 2013-07-05T10:38:40.157Z · LW(p) · GW(p)
An additional possibility in this vein which may throw off any calculations is that evolution -needs- to be rebooted a few times for a technological race to develop - that is, catastrophic disasters are a prerequisite of sapient life. The character of evolution changes over time (from big changes like sexual reproduction to small changes like new proteins becoming bioavailable), and there may be a critical series of evolutionary pressures in order to generate sapience.
↑ comment by turchin · 2015-07-23T12:26:23.619Z · LW(p) · GW(p)
Yes, human general intelligence mostly resulted from a constantly changing environment, probably caused by ice ages. So we are more likely to find ourselves in a period of climate instability, and this raises the probability of future extreme climate changes.
↑ comment by OrphanWilde · 2015-07-30T17:36:48.901Z · LW(p) · GW(p)
Not necessarily. Using climate as an example (asteroids might be another, or volcanic activity, or any number of other mass-extinction causing events), given that extreme climate changes are required for intelligence, extreme climate changes are necessarily required in the past. However, general climate stability may be required for life to survive at all. It may be the case that intelligent life requires a degree of climate instability that can't support life over long timeframes in order to evolve in short timeframes, in which case intelligent life should only appear on planets with inherently stable climates that have had a series of improbable one-off events resulting in an apparently unstable climate. (Or on planets with inherently unstable climates that have had a series of improbable one-off events resulting in an apparently stable climate.)
That is, geologic history might have an anthropic bias - it had to happen a certain way in order for us to evolve, regardless of the actual probability of it happening that way. The planet could be prone to disasters exactly as they have happened, a "moderate" inherent stability level for the planet - or the disasters could have been improbable one-off events on an inherently stable planet - or the current stable situation could be an improbable one-off event on an inherently unstable planet.
I think, overall, we can't really take history for granted.
↑ comment by turchin · 2015-07-30T17:47:21.143Z · LW(p) · GW(p)
This line of reasoning is known as the anthropic shadow. The worst case here is that a catastrophe is long overdue, and our world is in a meta-stable condition with respect to it. Even small actions of ours could provoke it. (For example, a release of methane hydrates could result in catastrophic global warming.)
comment by kilobug · 2013-07-05T09:32:14.283Z · LW(p) · GW(p)
Any reason to use the same probability for large and small disasters? Intuitively I would say small disasters are much more frequent than large ones.
↑ comment by Stuart_Armstrong · 2013-07-05T10:11:15.392Z · LW(p) · GW(p)
Only to keep the math simple! That assumption was enough to get the points I was making.
The results are valid in general, but this was the simplest one I could construct to illustrate the point.
comment by JoshuaZ · 2013-07-04T20:21:17.957Z · LW(p) · GW(p)
So though the real-world data is probably too poor to conclude anything, the above result raises the possibility that we could estimate the lineage effect independently, by looking at the anthropic shadows of unrelated disasters - and thus get another way of calculating the probability of technological civilizations such as our own emerging.
This is a really neat idea. Do you know if you or anyone else is working on making this sort of estimate?
comment by turchin · 2015-07-23T12:23:55.968Z · LW(p) · GW(p)
The main problem with the anthropic shadow is that we probably find ourselves in a world which is on the verge of a large catastrophe. It may be long overdue. It may already have happened in most Earth-like worlds, and we only find ourselves in one that was lucky enough to survive. But the conditions for the catastrophe may still be present.
It means that our world may be more sensitive to small actions, like deep drilling (supervolcanoes) or global warming (maybe a transition to a Venus-like atmosphere is long overdue).
comment by firstorderpredicate · 2013-07-04T23:37:49.833Z · LW(p) · GW(p)
Segment time into arbitrarily “eras”
Trivial nitpick: Missing word between arbitrarily and "eras". I couldn't quite work out from context whether it should be 'large', 'small', or 'sized'. Naturally, it doesn't affect the argument, being arbitrary.