SIA won't doom you

post by Stuart_Armstrong · 2010-03-25T17:43:06.467Z · LW · GW · Legacy · 32 comments

Katja Grace has just presented an ingenious model, claiming that SIA combined with the great filter generates its own variant of the doomsday argument. Robin echoed this on Overcoming Bias. We met soon after Katja had come up with the model, and I endorsed it, saying that I could see no flaw in the argument.

Unfortunately, I erred. The argument does not work in the form presented.

First of all, there is the issue of time dependence. We are not just a human-level civilization drifting through the void in blissful ignorance of our position in the universe. We know (approximately) the age of our galaxy, and the time elapsed since the big bang.

How is this relevant? It is relevant because all arguments about the great filter are time-dependent. Imagine we had just reached consciousness and human-level civilization, by some fluke, two thousand years after the creation of our galaxy, by an evolutionary process that took two thousand years. We see no aliens around us. In this situation, we have no reason to suspect any great filter; if we asked ourselves "are we likely to be the first civilization to reach this stage?" then the answer is probably yes. No evidence for a filter.

Imagine, instead, that we had reached consciousness a trillion years into the life of our galaxy, again via an evolutionary process that took two thousand years, and we see no aliens or traces of aliens. Then the evidence for a filter is overwhelming; something must have stopped all those previous likely civilizations from emerging into the galactic plane.

Neither of these civilizations can be included in our reference class (indeed, the second one can only exist if we ourselves are filtered!). So the correct reference class to use is not "the class of all potential civilizations in our galaxy that have reached our level of technological advancement and seen no aliens", but "the class of all potential civilizations in our galaxy that have reached our level of technological advancement at around the same time as us and seen no aliens". Indeed, SIA, once we update on the present, cannot tell us anything about the future.

But there's more. Let us lay aside, for the moment, the issue of time dependence. Let us instead consider the diagrams in Katja's post as if the vertical axis were time: all potential civilizations start at the same point, and progress at the same rate. Is there still a role for SIA?

The answer is... it depends. It depends entirely on your choice of prior. To illustrate this, consider this pair of early-filter worlds:

[diagram of the two early-filter worlds, X and Y, omitted]

To simplify, I've flattened the diagram, and now consider only two states: human civilizations and basic lifeforms. And here are some late filter worlds:

[diagram of the two late-filter worlds, A and B, omitted]

Assign an equal prior of 1/4 to each one of these worlds. Then the prior probability of living in a late filter world is (1/4+1/4)=1/2, and the same holds for early filter worlds.

Let us now apply SIA. This boosts the probability of Y and B at the expense of A and X: Y and B end up having a probability of 1/3 each, while A and X end up having a probability of 1/6 each. The posterior probability of living in a late filter world is (1/3+1/6)=1/2, and the same goes for early filter worlds. Applying SIA has not changed the odds of late versus early filters.
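For readers who want to check the arithmetic, here is a minimal sketch of the update (the factor-of-two weights for Y and B are read off the diagrams: I am assuming those worlds contain twice as many civilizations at our level as X and A do):

```python
from fractions import Fraction

def sia_posterior(priors, weights):
    """SIA as a Bayesian update: multiply each world's prior by the
    number of observers in our situation it contains, then renormalize."""
    joint = {w: priors[w] * weights[w] for w in priors}
    total = sum(joint.values())
    return {w: joint[w] / total for w in joint}

priors = {'X': Fraction(1, 4), 'Y': Fraction(1, 4),
          'A': Fraction(1, 4), 'B': Fraction(1, 4)}
weights = {'X': 1, 'Y': 2, 'A': 1, 'B': 2}  # assumed observer counts

post = sia_posterior(priors, weights)
print(post['Y'], post['X'])        # 1/3 and 1/6
print(post['A'] + post['B'])       # late filter: 1/2, unchanged by SIA
```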

But people might feel this is unfair; that I have loaded the dice, especially by giving world Y the same prior as the others. It has too many primitive lifeforms; it's too unlikely. Fine then; let us give prior probabilities as follows:

X: 2/30, Y: 1/30, A: 18/30, B: 9/30

This prior does not exactly over-weight the chance of human survival! The prior probability of a late filter is (18/30+9/30)=9/10, while that of an early filter is 1/10. But now let us consider how SIA changes those odds: Y and B are weighted by a factor of two, while X and A are weighted by a factor of one. The posterior probabilities are thus:

X: 1/20, Y: 1/20, A: 9/20, B: 9/20

The posterior probability of a late filter is (9/20+9/20)=9/10, same as before: again SIA has not changed the probability of where the filter is. But it gets worse; if, for instance, we had started with the priors:

X: 1/30, Y: 2/30, A: 18/30, B: 9/30

This is the same as before, but with X and Y swapped. The early filter still has only one chance in ten, a priori. But now if we apply SIA, the posterior probabilities of X and Y are 1/41 and 4/41, totalling 5/41 > 1/10. Here applying SIA has increased our chances of survival!
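The same style of computation verifies both of these prior assignments (again a sketch, with the same assumed factor-of-two SIA weights for Y and B):

```python
from fractions import Fraction

def sia_posterior(priors, weights):
    joint = {w: p * weights[w] for w, p in priors.items()}
    total = sum(joint.values())
    return {w: j / total for w, j in joint.items()}

weights = {'X': 1, 'Y': 2, 'A': 1, 'B': 2}   # assumed SIA weights

# Second set of priors: the early filter stays at 1/10 after the update.
p2 = sia_posterior({'X': Fraction(2, 30), 'Y': Fraction(1, 30),
                    'A': Fraction(18, 30), 'B': Fraction(9, 30)}, weights)
print(p2['X'] + p2['Y'])                     # 1/10

# Third set: X and Y swapped; SIA now *raises* the early-filter odds.
p3 = sia_posterior({'X': Fraction(1, 30), 'Y': Fraction(2, 30),
                    'A': Fraction(18, 30), 'B': Fraction(9, 30)}, weights)
print(p3['X'], p3['Y'], p3['X'] + p3['Y'])   # 1/41, 4/41, 5/41 > 1/10
```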

In general, there are a lot of reasonable priors over possible worlds where SIA makes little or no difference to the odds of the great filter, either way.


Conclusion: Do I believe that this has demonstrated that the SIA/great filter argument is nonsense? No, not at all. I think there is a lot to be gained from analysing the argument, and I hope that Katja or Robin or someone else - maybe myself, when I get some spare time, one of these centuries - sits down and goes through various scenarios, looks at classes of reasonable priors and evidence, and comes up with a conclusion about what exactly SIA says about the great filter, the strength of the effect, and how sensitive it is to prior changes. I suspect that when the dust settles, SIA will still slightly increase the chance of doom, but that the effect will be minor.

Having just saved humanity, I will now return to more relaxing pursuits.

32 comments


comment by JGWeissman · 2010-03-25T18:09:32.180Z · LW(p) · GW(p)

Presentation issue:

Rather than expressing probabilities in reduced fraction form, it would make the argument easier to follow if groups of probabilities were expressed with common denominators.

So instead of

X:1/15, Y:1/30, A:3/5, B: 3/10

say

X:2/30, Y:1/30, A:18/30, B: 9/30

This makes it easy to compare the probabilities using ratios of the numerators, with the denominators just there for normalization.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-03-26T18:24:14.829Z · LW(p) · GW(p)

Adjusted the presentation, thanks.

comment by KatjaGrace · 2010-03-25T23:24:24.062Z · LW(p) · GW(p)

Determining whether there is a filter is a separate issue from updating on the size of our 'reference class' in given scenarios. All that is needed for my argument is that there is apparently a filter at the moment.

You are correct that civilizations who know they are in the future or the past aren't added to our reference class for SIA purposes, but it looks to me like this makes no difference to the shift if the proportions of people in late-filter and early-filter worlds are the same across time - which I am assuming in a simple model, though you could of course complicate that.

" Indeed, SIA, once we update on the present, cannot tell us anything about the future."

For my argument it need only tell us about the present and the past. These can inform us about the future in the usual way (if we can work out where the filter has been in the past, chances are it hasn't just moved, which has implications for our future).

Given any particular world size, SIA means the filter is more likely to be late. Larger worlds with early filters can of course be made just as likely as smaller worlds with late filters, so if you double the size of the early filter worlds you look at, SIA makes no difference. If you were to include the one planet early filter world and the four planet late filter world in your original set, the usual shift toward late filter worlds would occur.

This doesn't seem a trick specific to SIA - you can do the same thing to avoid updating on many things, e.g. consider the following non-anthropic example:

There are two urns: a mixed urn with both odd- and even-numbered balls in it, and an odd urn with just odd-numbered balls. An urn will be chosen and some unknown number of balls pulled from it. You will be told afterwards whether number 17 comes up or not.

Number 17 did come up. Does this increase your posterior on the urn being the odd urn? Intuitively yes - for a given number of draws, around twice as many odd balls would have been drawn from the odd urn as from the mixed one, giving twice as many opportunities for 17 to come up. But now consider these options:

X) Two balls drawn from the mixed urn

Y) Four balls drawn from the mixed urn

A) One ball drawn from the odd urn

B) Two balls drawn from the odd urn

With the same priors as in your example, you get the same results.
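A quick check of the analogy (a minimal sketch; the concrete ball counts are arbitrary assumptions, since any 50/50 mix of odd and even balls gives the same likelihood ratios):

```python
from fractions import Fraction

# Assume balls numbered 1..100: the mixed urn holds all 100, the odd
# urn the 50 odd-numbered ones. Drawing k balls without replacement,
# P(ball 17 is among them) = k / urn_size.
options = {            # option: (balls drawn, urn size)
    'X': (2, 100),     # two balls from the mixed urn
    'Y': (4, 100),     # four balls from the mixed urn
    'A': (1, 50),      # one ball from the odd urn
    'B': (2, 50),      # two balls from the odd urn
}
priors = {'X': Fraction(1, 30), 'Y': Fraction(2, 30),
          'A': Fraction(18, 30), 'B': Fraction(9, 30)}

joint = {o: priors[o] * Fraction(k, n) for o, (k, n) in options.items()}
total = sum(joint.values())
print((joint['X'] + joint['Y']) / total)   # 5/41: same shift as with SIA
```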

Conclusion: I don't think any of this makes much difference to the original argument.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-03-26T11:51:05.494Z · LW(p) · GW(p)

Given any particular world size, SIA means the filter is more likely to be late.

No it doesn't. It boosts the probability of certain universes, some of which happen to have late filters - and some of which may have no filters at all. Consider the (very improbable) slow-start simultaneous worlds universe, where it takes several billion years for life to get going, but life is never filtered at all, and now the galaxy is filled with civilizations at approximately our level. This universe is very unlikely - but SIA boosts its probability!

Now I believe that the total effect is to boost the probability of late filters - but this is very far from a proof.

Larger worlds with early filters can of course be made just as likely as smaller worlds with late filters, so if you double the size of the early filter worlds you look at, SIA makes no difference.

Unless I'm misunderstanding you, this is not the point. The effect of SIA depends on the relative probabilities of X and Y, and the relative probabilities of A and B - not, in any way, on the relative probabilities of A versus X or anything like that. I can make X and Y as unlikely as you want, and yet still be in a situation where SIA increases your probability of an early filter.

Conclusion: I don't think any of this makes much difference to the original argument.

The way it was presented to me (and I may have misunderstood this, but I did ask Robin), future civilizations were included in the argument. Just removing them makes a huge difference to the odds. Whether past civilizations should be included is a more tricky point, and depends tremendously on your choice of priors.

comment by Jonathan_Graehl · 2010-03-26T09:17:14.828Z · LW(p) · GW(p)

Tangent: RNA-like replicators either arose by chance combination of more primitive (or non-) replicators somewhere on Earth, or arose elsewhere and came to Earth on spaceships or as spores on inert projectiles.

Let's assume that Earth's first interesting replicator spontaneously arose somewhere in a very low probability mixing of chemicals, followed by exponential growth (it just had to happen once). If this happened relatively soon after conditions permitted, does that suggest that the filter for life given an Earthlike planet is pretty easy to pass? I'd guess it does - for example, if life first arose a billion years later than it actually did, the filter is probably harder. But of course I'd hesitate to just perform the obvious calculation; I think you have to compute the probability given that we exist, and it's hard to figure out how likely it is that (something like) we exist if the first life arises earlier or later.

Does anyone know the best estimate for when some interesting replicator populated a significant part of Earth?

Do we have evidence of any pre-RNA replicator of significance? (another tangent, sorry).

Replies from: darius
comment by darius · 2010-03-28T09:09:38.485Z · LW(p) · GW(p)

does that suggest that the filter for life given Earthlike planet is pretty easy to pass?

Robin argued that it doesn't in his Great Filter article. If creating us takes several hard steps, hard enough that the expected time is much more than the time up to now, then for us to exist now we'd expect the first step to have happened quickly. (The detailed math is at a broken link.)

best estimate for when some interesting replicator populated a significant part of Earth?

There's evidence for very early life (e.g.) but I gather it's not nailed down.

Do we have evidence of any pre-RNA replicator of significance?

Here's a pop science article on a plausible precursor (which I can't competently evaluate).

comment by RobinHanson · 2010-03-26T03:00:51.192Z · LW(p) · GW(p)

A, B are worlds where the filter happens before life, and X, Y are where it happens before intelligence. You aren't including any worlds where the filter happens after where we are, so of course you don't see the main effect: concluding that it is more likely to happen after now than before now. You say you are introducing inference based on time and not just development level, but I don't see you using that in your example.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-03-26T11:34:46.282Z · LW(p) · GW(p)

You say you are introducing inference based on time and not just development level, but I don't see you using that in your example.

That's why you don't see any worlds where the filter happens after where we are - these worlds are not in our reference class (to use outdated SSA terminology). We can't use SIA on them.

There still is a way of combining SIA with the filter argument; it goes something like:

1) Use SIA on the present time to show there are lots of civilizations at our level around now.

2) Use a distribution on possible universes to argue that 1) implies there were lots of civilizations at our level around before.

3) From 2), argue that the filter is in our future.

The problem is 2). There are universes in which there is no great filter, but whose probability is boosted by SIA - say, slow-start simultaneous worlds, where it takes several billion years for life to get going, but life is never filtered at all, and now the galaxy is filled with civilizations at approximately our level. This world is very unlikely - but SIA boosts its probability!

So until we have some sensible distributions over possible worlds with filters, we can't assert SIA+great filter => DOOM. I feel it's intuitively likely that SIA does increase doom somewhat, but that's not a proof.

Replies from: drnickbone, RobinHanson
comment by drnickbone · 2012-04-21T20:35:20.995Z · LW(p) · GW(p)

This dispute about 2) seems a little desperate to me as a way out of doom.

Surely there is high prior probability for universes whose density of civilizations does NOT rise dramatically at a crucial time close to our own (such that at around our time t_0 ~ 13 billion years the density of civilizations at our level is high, whereas at times very slightly before t_0 in cosmological terms, the density is very low)? If you assume that with high probability, lots of civilizations now implies lots of civilizations a million years ago (but still none of them expanded) then we do get a Doomish conclusion.

Incidentally, another approach is to argue that SIA favours "Big Worlds" (ones containing a spatially-infinite universe, or infinitely many finite universes). But then, among the Big Worlds, SIA doesn't further favour a high density of civilizations at our level (since all such Big Worlds have infinitely many civilizations anyway, SIA doesn't "care" whether they appear on the order of once per star system, or once per Galaxy, or less than once per Hubble volume). This approach removes Katja's particular argument to a "late" filter, but unfortunately it creates another argument instead, since when we now apply SSA we get the usual Doomsday shift - see http://lesswrong.com/lw/9ma/selfindication_assumption_still_doomed/

Broadly I've now looked at a number of versions of anthropic reasoning: SSA with and without SIA, variations of reference class a la Bostrom, and attempts to avoid anthropic reasoning completely (such as "full non-indexical conditioning"). Whichever way I cut it, I'm getting a "Doom" conclusion. I'm thinking of putting together a main post on this at some point.

comment by RobinHanson · 2010-03-26T12:25:41.351Z · LW(p) · GW(p)

Katja's example clearly included worlds with a filter past our level, and I see nothing wrong with her example.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-03-26T18:23:18.102Z · LW(p) · GW(p)

Her example takes no account of the time period at which each civilization reaches each stage. For her example to work, she'd have to come up with a model where civilizations appear and mature at different time intervals, then apply SIA to civilizations at our level and in our current time period, and then show that this implies a late great filter.

It can be done, quite easily, but there are also models where SIA implies an early filter, or none at all. SIA boosts some worlds, and some of the worlds boosted by SIA have late filters, others have early filters.

The argument is not yet completed.

comment by drnickbone · 2012-05-01T19:26:03.068Z · LW(p) · GW(p)

Stuart - something is not clear in your diagram. Does world Y have twice as many planets as world B, or does it have the same number of planets, but twice as many planets with simple life-forms? (Same question for X and A).

If they have the same number of planets then world B also has an early filter rather than late filter (since it has lower probability of each planet developing life). So you're not strictly comparing late filter worlds vs early filter worlds.

If they have different numbers of planets, then you have another problem: why would the prior be such as to contain more planets in early-filter worlds than in late-filter worlds? That seems unreasonable, since a priori there is no relationship between the size of a world and the lateness/earliness of its filter. Also, we need to take account of SIA shifting all the probability weight up to the largest worlds anyway (infinite size if the prior allows that), so we basically ignore the small worlds in any case. If we want to keep things finite to ensure the probability distribution is well-behaved, we should assume that all world models worth thinking about have, say, N = 3^^^3 planets.
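A toy illustration of that shift (numbers chosen purely for illustration): give four world sizes equal priors and weight each by its population; nearly all the posterior then lands on the biggest world.

```python
from fractions import Fraction

sizes = [1, 10, 100, 1000]                     # hypothetical planet counts
prior = {n: Fraction(1, len(sizes)) for n in sizes}

total = sum(prior[n] * n for n in sizes)       # SIA weight ~ population
posterior = {n: prior[n] * n / total for n in sizes}
print(posterior[1000])                         # 1000/1111, about 0.9
```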

comment by James_Miller · 2010-03-25T18:47:14.917Z · LW(p) · GW(p)

You wrote "the class of all potential civilizations in our galaxy that have reached our level of technological advancement at around the same time as us and seen no aliens"

Does this violate relativity by assuming there exists some absolute type of simultaneity?

Replies from: khafra, wnoise
comment by khafra · 2010-03-25T19:27:11.572Z · LW(p) · GW(p)

Considering the size of the galaxy relative to the age of the universe, I'd say replacing "the same time" with "the same spacelike interval" doesn't change the meaning enough to offset the distracting terminology.

comment by wnoise · 2010-03-25T20:50:37.127Z · LW(p) · GW(p)

It doesn't, because the motion of the galaxy sets a common reference frame that most objects in the galaxy do not depart greatly from, and did not depart greatly from in the evolution of stars, planets, and any possible life.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2010-03-26T11:26:58.379Z · LW(p) · GW(p)

Yep.

You could even make some argument that the cosmic background radiation provides some sort of crude "universal standard of rest". More importantly, given the probable origins of the galaxy, any alien beings that have gone through relativistic accelerations relative to us will have had less time to develop than us, not more.

comment by Rain · 2010-03-25T17:53:55.832Z · LW(p) · GW(p)

Did you see James_Miller's post on this same set of articles?

comment by timtyler · 2010-03-26T00:11:09.128Z · LW(p) · GW(p)

It is strange how people still seem to think this argument has something to do with doomsday - when what it is actually about is a failure to make progress towards galactic civilisation.

I've pointed this out several times now - and so far nobody has even attempted to make a case for terminal setbacks - rather than roadblocks. Doomsday makes for better headlines, I guess.

[Update 2010-11-05] - K.G. now discusses such a case in her thesis - available from: http://meteuphoric.wordpress.com/2010/11/02/anthropic-principles-agree-on-bigger-future-filters/

Replies from: RobertWiblin, Jonathan_Graehl, James_Miller, rwallace
comment by RobertWiblin · 2010-03-26T01:23:36.077Z · LW(p) · GW(p)

That's because there seem to be many chances for us to go extinct. If we are necessarily held back from space for thousands of years, it's very unlikely we would last that long just here on Earth.

Replies from: timtyler
comment by timtyler · 2010-03-26T09:42:35.259Z · LW(p) · GW(p)

Usually, the people who think this sort of thing count there not being any humans around as us "going extinct". That is all very well if you are fixated on the human form - but it is not a valid concept in the context of the Fermi paradox - since our post-human offspring are likely to be far more capable of executing the task of forming a galactic civilisation than we are.

If you think that there are many chances for civilization to go extinct - then I beg to differ. The chances of civilization going extinct are pretty minuscule - in my estimation. Discussion of that happening usually has far more to do with memes, mind viruses, manipulation, exploitation and paranoia than it does to do with actual risk.

comment by Jonathan_Graehl · 2010-03-26T09:19:24.404Z · LW(p) · GW(p)

I didn't reply because you're obviously right.

comment by James_Miller · 2010-03-26T00:25:21.036Z · LW(p) · GW(p)

What kind of terminal setback, other than extinction, would stop humanity from making significant "progress towards galactic civilisation" sometime during, say, the next 100 million years?

Replies from: timtyler, Steko
comment by timtyler · 2010-03-26T09:17:20.765Z · LW(p) · GW(p)

A "terminal setback" is pretty similar to extinction, by definition. There are a few examples: life forming a single navel-gazing wireheading organism would be a pretty "terminal" setback.

However, we face many more roadblocks on the way to forming a galactic civilization. We have to master intelligence, nanotechnology, fusion and space flight, and then we have to agree to devote large quantities of resources to interstellar colonisation.

I fully expect that we will do those things - but I take this more detailed analysis as evidence that we are past most of the roadblocks - not as evidence that we are likely to accidentally annihilate ourselves!

comment by Steko · 2010-03-26T01:07:22.625Z · LW(p) · GW(p)

Colonies and expanding populations likely become irrelevant once you approach the level of technology necessary for intergalactic travel. Quite possibly communication as well.

Replies from: James_Miller
comment by James_Miller · 2010-03-26T01:17:08.303Z · LW(p) · GW(p)

If only a tiny percent of an advanced civilization wants to colonize new worlds then the civilization will grow at an exponential rate. Also, unless the laws of physics are very different from what we think, most civilizations would try to capture free energy and either turn off stars or capture all the energy being radiated from the stars.

Replies from: Steko
comment by Steko · 2010-03-26T02:14:10.169Z · LW(p) · GW(p)

I find all of these propositions questionable. It's not clear at all that they will need to (1) reproduce, (2) relocate, or (3) capture an absurd amount of free energy. We can speculate that they might want to do any of those, but the arguments that they won't seem just as strong.

I highly doubt there will be any disagreement about the merits and needs of colonization within a civilization capable of intergalactic travel - it will either be a good idea and they will agree or it will be a bad idea and they will agree not to.

Seeing no evidence of colonization (and knowing that if they all do it, they will come into conflict with each other and risk their extinction), the suggestion that they all decide not to do it is a reasonable possibility.

Then timtyler's point is easy to see: this isn't so much about doomsday as about a change in society that devalues reproduction and expansion. There may be very few to no humans born after, say, 2300 AD. And that's because people don't need offspring to work in the fields anymore, don't fulfill their sexual needs like other animals, have incredibly inflated lifespans, etc.

Replies from: knb
comment by knb · 2010-03-26T05:05:04.965Z · LW(p) · GW(p)

I highly doubt there will be any disagreement about the merits and needs of colonization within a civilization capable of intergalactic travel - it will either be a good idea and they will agree or it will be a bad idea and they will agree not to.

There are such disagreements in our civilization. Why would more advanced civilizations stop having value disagreements? Unless almost all civilizations end in singletons, such value disputes seem likely.

There may be very few to no more humans born after say 2300 AD. And that's because people don't need offspring to work in the fields anymore, don't fulfill their sexual needs like other animals, have incredibly inflated lifespans, etc.

This is hugely unlikely. We currently have the ability to turn reproduction off. Yet many people go to extraordinary lengths to turn reproduction on. There are population subgroups within almost every major country that exhibit strong pro-natalist tendencies, and have preferences for large-to-huge families. There are many such groups just in the United States (fundamentalist Mormons, "Quiverfull" folks, Amish, Hutterites, etc.). If even a small minority opts to reproduce and expand, they will have huge selective advantages over those who opt out of reproduction and expansion.

Non-expansion is not evolutionarily stable.

Replies from: Steko
comment by Steko · 2010-03-26T16:07:28.006Z · LW(p) · GW(p)

"We currently have the ability to turn reproduction off. Yet many people go to extraordinary lengths to turn reproduction on. "

This is true because there are still many good reasons to have children. I don't see any of these reasons being certain and compelling by the time intergalactic travel is possible. We haven't even scratched the surface on what technology is going to do to economics, and already the fertility rate in prosperous countries is below replacement. We may well expand forever, but I don't think it's at all obvious.

"If even a small minority opts to reproduce and expand, they will have huge selective advantages over those who opt out of reproduction and expansion."

What if a small minority wanted to kill everyone? As technology increases (to the point of allowing things like an independent faction to do this), you have to assume there would be strong pressures and protections in place to prevent the sort of factionalism that currently dominates. And if a large, technologically advanced majority doesn't want you to reproduce I'd guess you are not going to reproduce.

Replies from: knb
comment by knb · 2010-03-26T17:28:03.043Z · LW(p) · GW(p)

This is true because there are still many good reasons to have children.

Children have gone from being productive capital goods to consumption goods. I don't see any evidence that children are losing or will lose their value as consumption goods.

I'm saying that the zero population growth faction will be a tiny minority by the time a civilization grows large and advanced. All the selection pressure works against the zero-population-growth/non-expansion faction.

Replies from: Steko
comment by Steko · 2010-03-26T21:31:14.473Z · LW(p) · GW(p)

"Children have gone from being productive capital goods to consumption goods. I don't see any evidence that children are losing or will lose their value as consumption goods."

Wait -- the value of children recently changed fundamentally but we should expect no more change far in the future?

"I'm saying that the zero population growth faction will be a tiny minority by the time a civilization grows large and advanced."

This does not reconcile at all with current population trends of developed nations. The UN medium projection for 2050 has the entire world at a fertility rate of 2.02. Go ahead and assume people still want children as consumption goods; the data suggests that not enough of them want this to maintain even zero population growth beyond the current century.

Replies from: knb
comment by knb · 2010-03-27T04:58:29.047Z · LW(p) · GW(p)

Wait -- the value of children recently changed fundamentally but we should expect no more change far in the future?

No, there are expected changes. We should expect that the transition of children from capital to consumption goods will continue as more places move away from subsistence farming and develop old-age pension systems. This will predictably shift selective advantage toward folks with active pro-natalist genes or memes.

This does not reconcile at all with current population trends of developed nations. The UN medium projection for 2050 has the entire world at 2.02. Go ahead and assume people still want children as consumption goods, data suggests that not enough of them want this to maintain even zero population growth beyond the current century.

Perhaps the growth of the high-fertility subgroups (like Mormons, Muslims, Quiverfulls, Amish, Cossacks, ultra-Orthodox Jews, etc.) will not outmatch the declining fertility of other groups by the end of the 21st century, but they will eventually. Their growth is rapid, exponential, and unbounded. The decline of the sub-replacement groups is slow and bounded (they can't decline to less than zero).

comment by rwallace · 2010-03-26T02:31:31.928Z · LW(p) · GW(p)

I have! http://lesswrong.com/lw/10n/why_safety_is_not_safe/

But basically, yes, I agree completely. I'm not convinced even by this version of the Doomsday Argument, not because I have a refutation to hand, but because the track record of this kind of philosophical reasoning in actually producing right answers is probably worse than random chance; that having been said, I can believe it could be valid, and that most intelligent species end up hamstringing themselves in the name of safety and spending all their energy on bickering about intra-species politics until their time runs out.