Snyder-Beattie, Sandberg, Drexler & Bonsall (2020): The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare

post by Kaj_Sotala · 2020-11-24T10:36:40.843Z · LW · GW · 16 comments

This is a link post for https://www.liebertpub.com/doi/full/10.1089/ast.2019.2149

Copying over Anders Sandberg's Twitter summary of the paper:

There is life on Earth, but this is not evidence for life being common in the universe! This is because observing life requires living observers: even if life is very rare, all observers will find themselves on planets with life. Observation selection effects need to be handled!

Observer selection effects are annoying and can produce apparently paradoxical results, such as your friends on average having more friends than you, or our existence "preventing" recent giant meteor impacts. But one can control for them with some ingenuity.

Life emerged fairly early on Earth: evidence that it is easy and common? Not so fast: if you need multiple hard steps to evolve an observer to marvel at it, then on those super-rare worlds where observers do show up, life statistically tends to emerge early.

If we have N hard steps (say: life, good genetic coding, eukaryotic cells, brains, observers), then as the difficulty of the steps goes to infinity, in the cases where all steps succeed before the biosphere ends, the step completion times become equidistantly spaced across the habitable window.
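The equidistant-spacing result can be illustrated with a toy Monte Carlo sketch (my own illustration with made-up rates, not the paper's actual model): treat each hard step as an exponential waiting time with a low success rate, keep only the runs where all steps finish inside the habitable window, and look at the average completion times.

```python
import random

random.seed(0)
N_STEPS = 4     # hard steps (e.g. life, eukaryotes, brains, observers)
RATE = 0.1      # 1 expected success per 10 Gyr: each step is individually "hard"
T = 4.5         # length of the habitable window, in Gyr

accepted = []
for _ in range(1_000_000):
    t, times = 0.0, []
    for _ in range(N_STEPS):
        t += random.expovariate(RATE)   # exponential waiting time for this step
        times.append(t)
    if t < T:                           # keep only worlds where observers appear
        accepted.append(times)

# Conditional mean completion times: close to T*k/(N_STEPS+1), i.e. evenly
# spaced between the start and end of habitability.
means = [sum(run[i] for run in accepted) / len(accepted) for i in range(N_STEPS)]
print([round(m, 2) for m in means])     # roughly evenly spaced across [0, 4.5]
```

Only a tiny fraction of runs survive the rejection step, which is the point: conditional on success, the observed timings tell you little about the underlying rates.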

That means that we can take observed timings and calculate backwards to get probabilities compatible with them, controlling for observer selection bias.

Our argument builds on a chain from Carter's original argument and its extensions to Bayesian transition models. I think our main addition here is using noninformative priors.

https://royalsocietypublishing.org/doi/10.1098/rsta.1983.0096

https://arxiv.org/abs/0711.1985v1

http://mason.gmu.edu/~rhanson/hardstep.pdf

https://www.pnas.org/content/pnas/109/2/395.full.pdf

The main take-home message is that one can rule out fairly high probabilities for the transitions, while super-hard steps are compatible with observations. We get good odds on us being alone in the observable universe.

If we found a dark biosphere, or life on Venus, that would weaken the conclusion; similarly for big updates on when some transitions happened. We have various sensitivity checks in the paper.

Our conclusions (if they are right) are good news if you are worried about the Great Filter. We have N hard filters behind us, so the empty sky is not necessarily bad news. We may be lonely, but have much of the universe to ourselves.

Another cool application is that this line of reasoning really suggests that M-dwarf planets must be much less habitable than they seem: otherwise we should expect to be living around one, since they are so common compared to G2 stars.

Personally I am pretty bullish about M-dwarf planet habitability (despite those pesky superflares), but our result suggests that there may be extra effects impairing them. They need to be pretty severe too: they need to reduce habitability probability by a factor of over 10,000. 

I see this paper as part of a trilogy started with our "anthropic shadows" paper and completed by a paper on observer selection effects in nuclear war near misses (coming, I promise!). Oh, and there is one about estimating the remaining lifetime of the biosphere.

The basic story is: we have a peculiar situation as observers. All observers do. But we can control a bit for this peculiarity, and use it to improve what we conclude from weak evidence, especially about risks. Strong evidence is better though, so let's try to find it!

The paper itself is available as open access.

16 comments

comment by Richard_Ngo (ricraz) · 2020-11-25T00:33:13.141Z · LW(p) · GW(p)

Thinking out loud:

Suppose we treat ourselves as a random sample of intelligent life, and make two observations: first, we're on a planet that will last for X billion years, and second that we emerged after Y billion years. And we're trying to figure out Z, the expected time that life would take to emerge (if planet longevity weren't an issue).

This paper reasons from these facts to conclude that Z >> Y, and that (as a testable prediction) we'll eventually find that planets which are much longer-lived than the Earth are probably much less habitable for other reasons, because otherwise we would almost certainly have emerged on one.

But it seems like this reasoning could go exactly the other way. In particular, why shouldn't we instead reason: "We have some prior over how habitable long-lived planets are. According to this prior, it would be quite improbable if Z >> Y, because then we would almost definitely have found ourselves on a long-lived planet."

So what I'm wondering is, what licenses us to ignore this when doing the original bayesian calculation of Z?

comment by Lanrian · 2020-11-25T12:02:14.129Z · LW(p) · GW(p)

We're not licensed to ignore it, and in fact such an update should be done. Ignoring that update represents an implicit assumption that our prior over "how habitable are long-lived planets?" is so weak that the update wouldn't have a big effect on our posterior. In other words, if the beliefs "long-lived planets are habitable" and "Z is much bigger than Y" are contradictory, we should decrease our confidence in both; but if we're much more confident in the latter than the former, we mostly decrease the probability mass we place on the former.

Of course, maybe this could flip around if we get overwhelmingly strong evidence that long-lived planets are habitable. And that's the Popperian point of making the prediction: if it's wrong, the theory making the prediction (ie "Z is much bigger than Y") is (to some extent) falsified.

comment by James_Miller · 2020-11-24T12:51:14.401Z · LW(p) · GW(p)

At the end of Section 5.3 the authors write "So far, we have assumed that we can derive no information on the probability of intelligent life from our own existence, since any intelligent observer will inevitably find themself in a location where intelligent life successfully emerged regardless of the probability. Another line of reasoning, known as the “Self-Indication Assumption” (SIA), suggests that if there are different possible worlds with differing numbers of observers, we should weigh those possibilities in proportion to the number of observers (Bostrom, 2013). For example, if we posit only two possible universes, one with 10 human-like civilizations and one with 10 billion, SIA implies that all else being equal we should be 1 billion times more likely to live in the universe with 10 billion civilizations. If SIA is correct, this could greatly undermine the premises argued here, and under our simple model it would produce high probability of fast rates that reliably lead to intelligent life (Fig. 4, bottom)...Adopting SIA thus will undermine our results, but also undermine any other scientific result that would suggest a lower number of observers in the Universe. The plausibility and implications of SIA remain poorly understood and outside the scope of our present work."  

I'm confused, probably because anthropic effects confuse me and not because the authors made a mistake.  But don't the observer selection effects the paper uses derive information from our own existence, and if we make use of these effects shouldn't we also accept the implications of SIA?  Should rejecting SIA because it results in some bizarre theories cause us to also have less trust in observer selection effects?

comment by Zumsel · 2020-11-24T20:38:06.892Z · LW(p) · GW(p)

When using SSA (which I think the authors do implicitly), you can exclude worlds which contain no observer "like you", but there's no anthropic update to the relative probabilities of the worlds that contain at least one such observer. When using SIA, the probability of each world gets updated in proportion to its number of observers.

Consider a variant of "god's coin toss". God has a device that outputs A, B, or C with equal probability. When seeing A, god creates 0 humans, when seeing B god creates 1 human, and when seeing C god creates 2 humans. You're one of the humans created this way and don't know how many other humans have been created. What should be your probability distribution over {A, B, C}? According to SSA, B and C should have probability 1/2 each, while according to SIA, B has probability 1/3 and C has probability 2/3.
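The arithmetic in this coin-toss variant is simple enough to write down explicitly; here is a minimal sketch (my own illustration, not from the comment) of how the two rules weight the worlds:

```python
from fractions import Fraction

# World -> number of observers created: A has 0, B has 1, C has 2.
observers = {"A": 0, "B": 1, "C": 2}
prior = {w: Fraction(1, 3) for w in observers}   # the device is fair

def normalize(weights):
    total = sum(weights.values())
    return {w: p / total for w, p in weights.items()}

# SSA: rule out observer-free worlds, but apply no further weighting.
ssa = normalize({w: prior[w] * (1 if n > 0 else 0) for w, n in observers.items()})

# SIA: weight each world by its number of observers.
sia = normalize({w: prior[w] * n for w, n in observers.items()})

print(ssa)   # SSA gives B and C probability 1/2 each
print(sia)   # SIA gives B probability 1/3 and C probability 2/3
```

The discontinuity mentioned below shows up here too: under SSA, world A's probability jumps from 1/3 to 0 as soon as you condition on existing, while B and C split the remainder evenly regardless of how many observers they contain.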

The fact that SSA has a discontinuity between zero and any positive number of observers is one of the standard arguments against SSA, see e.g. argument 4 against SSA here: 

https://meteuphoric.com/anthropic-principles/

comment by James_Miller · 2020-11-25T03:43:39.668Z · LW(p) · GW(p)

Thanks, that's a very clear explanation. 

comment by avturchin · 2020-11-24T14:11:55.176Z · LW(p) · GW(p)

SIA also implies that we likely live in a world with interstellar panspermia and many habitable planets in our galaxy, as I explored here. In that case, the difficulty of abiogenesis is not a big problem, as there will be many planets inseminated with life from one source.

 

Moreover, SIA and SSA seem to converge in a very, very large universe where all possible observers exist: in it, I will find myself in the region where most observers exist, which - with some caveats - will be a region with a high concentration of observers.

comment by James_Miller · 2020-11-25T13:02:25.713Z · LW(p) · GW(p)

Yes for a high concentration of observers. And if high-tech civilizations have strong incentives to grab galactic resources as quickly as they can, thus preventing the emergence of other high-tech civilizations, then most civilizations such as ours will exist in universes that have some kind of late great filter to knock down civilizations before they can become spacefaring.

comment by avturchin · 2020-11-25T15:22:21.537Z · LW(p) · GW(p)

Berezin suggested that if the first civilisation kills all other civilisations, then we are probably that first civilisation.

Also, if panspermia is true, the ages of civilizations will be similar, and several civilizations could become "first" almost simultaneously in one galaxy, which creates interesting colonisation dynamics.

comment by Troy Macedon (troy-macedon) · 2020-11-28T13:49:55.796Z · LW(p) · GW(p)

So we need to kill all other civilizations that we encounter in the future, AND we need to do it before they reach their version of 2020-AD technology? It doesn't help us if we kill them off after this. But we don't actually need to wipe them out entirely, just force their development to stop at some pre-technological point. Given that we are already past this point, we don't need to fear another version of ourselves. We'd even be able to update this imposed hard stop after a while.

comment by avturchin · 2020-11-28T15:53:08.256Z · LW(p) · GW(p)

We may not actually kill them, but we can prevent their appearance by colonising their planets millions of years before a civilisation would have a chance to appear on them. This is Berezin's idea.

comment by abramdemski · 2020-11-28T05:08:55.703Z · LW(p) · GW(p)

Does this have a doomsday-argument-like implication that we're close to the end of the Earth's livable span, because if life takes a long time to evolve observers then it's overwhelmingly probable that consciousness arises shortly before the end of the livable span?

comment by Lanrian · 2020-11-25T16:02:05.169Z · LW(p) · GW(p)

The paper lists "intelligence" as a potentially hard step, which is of extra interest for estimating AI timelines. However, I find all the convergent evolution described in section 5 of this paper (or more shortly described in this blogpost [LW · GW]) to be pretty convincing evidence that intelligence was quite likely to emerge after our first common ancestor with octopuses ~800 mya; and as far as I can tell, this paper doesn't contradict that.

comment by Richard_Ngo (ricraz) · 2020-11-28T21:35:12.619Z · LW(p) · GW(p)

I think it might be quite hard to go from dolphin- to human-level intelligence.

I discuss some possible reasons in this post [AF · GW]:

I expect that most animals [if magically granted as many neurons as humans have] wouldn’t reach sufficient levels of general intelligence to do advanced mathematics or figure out scientific laws. That might be because most are too solitary for communication skills to be strongly selected for, or because language is not very valuable even for social species (as suggested by the fact that none of them have even rudimentary languages). Or because most aren’t physically able to use complex tools, or because they’d quickly learn to exploit other animals enough that further intelligence isn’t very helpful, or...

comment by Lanrian · 2020-11-29T23:02:02.664Z · LW(p) · GW(p)

The time it took to reach human-level intelligence (HLI) was quite short, though, which is decent evidence that HLI is easy. Our common ancestor with dolphins was just 100mya, whereas there's probably more than 1 billion years left for life on Earth to evolve.

Here's one way to think about the strength of this evidence. Consider two different hypotheses:

  • HLI is easy. After our common ancestor with dolphins, it reliably takes N million years of steady evolutionary progress to develop HLI, where N is uniformly distributed.
  • HLI is hard. After our common ancestor with dolphins, it reliably takes at least N million years (uniformly distributed) of steady evolutionary progress, and for each year after that, there's a constant, small probability p that HLI is developed. In particular, assume that p is so small that, if we condition on HLI happening at some point (for anthropics reasons), the time at which HLI happens is uniform between the end of the N million years and the end of all life on Earth.

Let's say HLI emerged on Earth exactly 100my after our common ancestor with dolphins. After our common ancestor with dolphins, let's say there were 1100 million years remaining for life to evolve on Earth (I think it's close to that). We can treat N as being distributed uniformly between 1 and 100, because we know it's not more than 100 (our existence contradicts that). If so:

  • P(HLI at 100my | HLI is easy) = 1/100
  • P(HLI at 100my | HLI is hard) ≈ 1/1000

Thus, us evolving at 100my is roughly a 10:1 update in favor of HLI being easy.

(Note that, since the question under dispute is the ease of getting to HLI from dolphin intelligence, counting from 100mya is really conservative; it might be more appropriate to count from whenever primates acquired dolphin intelligence. This could lead to much stronger updates; if we count time from e.g. 20mya instead of 100mya, the update would be 50:1 instead of 10:1, since P(HLI at 20my | HLI is easy) would be 1/20.)

This is somewhat, but not totally, robust to small probabilities of variant assumptions. E.g. if we assign 20% chance to life actually needing to evolve within 200 million years after our common ancestor with dolphins, we get:

  • P(HLI at 100my | HLI is easy) = 1/100
  • P(HLI at 100my | HLI is hard) ≈ 0.8 × (1/1000) + 0.2 × (1/140) ≈ 0.0022

So the update would be more like 1:0.22, i.e. roughly 4.5:1 in favor of HLI being easy.
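Both ratios can be checked numerically under the stated model (N uniform on {1, ..., 100} million years; in the hard case, HLI arriving uniformly between N and the deadline). This is my reconstruction of the comment's arithmetic, not the commenter's own code:

```python
N_vals = range(1, 101)   # hard-step delay N, uniform on 1..100 My
p_easy = 1 / 100         # P(HLI at exactly 100 My | easy) = P(N = 100)

def p_hard(deadline):
    # Density of HLI at t = 100 My, averaged over N,
    # with arrival uniform on (N, deadline) in the hard case.
    return sum(1 / (deadline - n) for n in N_vals) / 100

print(p_easy / p_hard(1100))                    # close to 10: the ~10:1 update
mixed = 0.8 * p_hard(1100) + 0.2 * p_hard(200)  # 20% chance of a 200 My deadline
print(p_easy / mixed)                           # close to 4.5: the weakened update
```

The small discrepancies from the quoted 10:1 and 4.5:1 come from how N is discretized; the qualitative conclusion is unchanged.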

If you think dolphin intelligence is probably easy, I think you shouldn't be that confident that HLI is hard, so after updating on earliness, I think HLI being easy should be the default hypothesis.

comment by avturchin · 2020-11-25T20:35:55.201Z · LW(p) · GW(p)

I also have the following question: which timing of transitions would imply that Earth is not rare? One answer is that the time remaining until the oceans evaporate could be much longer - not 1, but 4 billion years - but that doesn't account for the timing of the 4 main transitions.

But imagine that each transition typically happens at a rate of once per billion years. In that case, having 4 transitions in 4 billion years seems pretty normal.

If we instead assume that the typical time for each transition is 10 billion years, the current age of Earth's biosphere would not be normal - but this is not informative, as we got out just what we assumed.

comment by RedMan · 2020-11-25T16:38:09.048Z · LW(p) · GW(p)

Of the 'we are first, we are freaks, we are fucked' categories of great filter explanations, I think (consistent with this paper) we are definitely freaks; it looks like we may be first (at least in the parts of the universe that might in theory be reachable with existing physics/von Neumann probes); and the jury is out on whether we are currently fucked. (I'm a pessimist: I think we might be like the patient who ate a bottle of Tylenol - feeling fine, but definitely dead in a few days due to impending liver failure.)