Implications of the Grabby Aliens Model

post by harsimony · 2021-12-06T18:34:44.985Z · LW · GW · 3 comments

This is a link post for https://harsimony.wordpress.com/2021/12/05/implications-of-the-grabby-aliens-model/

Hanson’s Grabby Aliens Model is one way to explain the apparent lack of other intelligent life in the universe. Essentially, if you believe it’s possible for extraterrestrial civilizations to expand across the galaxy and transform large amounts of matter, then most observers (e.g. people) have to arrive before these civilizations, since we wouldn’t exist if the extraterrestrials had gotten here first. But others can explain it better than I can; instead, I want to focus on the implications of the model, assuming it’s true.

Hanson has previously argued that the lack of visible alien civilizations suggests it is typical for life to go extinct through an unknown process called the Great Filter. The Great Filter is pretty worrying, because it suggests there may be some unknown x-risk that is hard to avoid.

But the grabby aliens model offers an alternative. If we don’t see other civilizations because of selection effects, then we don’t have to worry as much about the Great Filter, and this reduces our expected probability of an existential catastrophe [1].

The grabby aliens model also changes the value of existential risk reduction. On one hand, the upper bound on the size of our civilization is much smaller under grabby aliens, since we must share the universe with many other civilizations. On the other hand, even if we went extinct, the universe wouldn’t remain empty, since some other civilization would take our place.

The presence of competing civilizations changes how we should prepare for the future. The universe fills up much faster when many civilizations grow in parallel [2]. Moving quickly can allow humanity to claim a larger fraction of the universe for itself. Additionally, it becomes critical to develop tools for collaboration and resource sharing with alien species.

The possibility of meeting alien civilizations also means that we have to consider how our actions will affect our reputation [3]. Radio broadcasts have been leaving Earth for a century. What do they say about us?

Being able to establish a reputation before meeting another civilization is crucial. The period before we become aware of other civilizations gives us an opportunity to pre-commit to certain courses of action. For example, we might pre-commit to being friendly to cooperative aliens. Other civilizations may make similar commitments, but what is the equilibrium of this game?

If accurate, the grabby aliens model changes our understanding of life, existential risk, and the future of civilization. The implications are just beginning to be explored, and I am excited to see what people come up with next.

Notes

  1. This also raises the profile of ideas like the Genesis project. If life is more viable than we thought, it might be worthwhile to spread it to places we will never reach (or to plant allies in places we will eventually reach).

  2. This means that there are higher potential rewards via collaboration but also higher s-risks in the future.

  3. Reputational concerns are relevant for any long-lived agent originating from our civilization.

3 comments

comment by Jay Olson (stephan-olson) · 2022-08-09T01:12:38.211Z · LW(p) · GW(p)

Toby Ord and I wrote a paper that describes how "expanding cosmological civilizations" (my less snappy but more descriptive term for "grabby civilizations") update our estimates of any late filters you might want to consider, assuming SIA as one's anthropic school: https://arxiv.org/abs/2106.13348

Basically, suppose you have some prior pdf P(q) on the probability q that we pass any late filter. Then considering expanding civilizations tells you to update it to P(q) → P(q)/q (up to renormalization). And this isn't good, since it upweights low values of q (i.e. lower survival probability).
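
A minimal numerical sketch of that update (my own illustration, not code from the paper): put a prior on a grid of q values, reweight by 1/q, and renormalize. The posterior mean of q drops substantially.

```python
import numpy as np

# Hypothetical illustration of the P(q) -> P(q)/q update described above.
q = np.linspace(0.01, 1.0, 100)     # grid of late-filter survival probabilities
prior = np.ones_like(q) / len(q)    # uniform prior P(q)
posterior = prior / q               # SIA-style reweighting by 1/q
posterior /= posterior.sum()        # renormalize to a proper distribution

print("prior mean of q:    ", round(float((q * prior).sum()), 3))      # ~0.5
print("posterior mean of q:", round(float((q * posterior).sum()), 3))  # much lower
```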

A general argument analogous to this was actually advanced by Katja Grace, long before we started studying expanding cosmological civilizations, just in the context of regular galactic SETI. But the geometry of ECCs gives it a surprisingly simple, quantifiable form.

comment by harsimony · 2022-08-09T18:56:38.168Z · LW(p) · GW(p)

Interesting!

So if I am understanding correctly, SIA puts more weight on universes with many civilizations, which lowers our estimate of the survival probability q. This is true regardless of how many expanding civs we actually observe.

The latter point was surprising to me, but on reflection, perhaps each observation of an expanding civ also increases the estimated number of civilizations. That would mean observing an expanding civ has two effects: 1) increasing the estimated feasibility of passing a late filter, and 2) increasing the expected number of civilizations that didn't pass the filter. These effects might cancel out, leaving no net update.

So I was wrong to say that the grabby aliens model should reduce our x-risk estimates. This is interesting because, in a simple discounting model, higher baseline risk lowers the value of trying to mitigate existential risks:

https://www.lesswrong.com/posts/JWMR7yg7fg3abpz6k/a-formula-for-the-value-of-existential-risk-reduction-1 [LW · GW]

This implies that a person hoping to encourage work on existential risks may want to convince others that it's feasible to expand into space.
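
To make the discounting point concrete, here is a toy version of such a model (my own construction; the formula in the linked post may differ). If civilization produces value v per period and faces a constant per-period extinction probability r, its expected total value is v/r, so a small absolute risk reduction delta is worth roughly v*delta/r^2, which shrinks quickly as baseline risk grows.

```python
# Toy discounting model (my construction; the linked post's formula may differ).
# With value v per period and constant per-period extinction probability r,
# expected total value is the geometric series v / r.
def value_of_risk_reduction(v: float, r: float, delta: float) -> float:
    """Gain in expected value from reducing per-period risk by delta."""
    return v / (r - delta) - v / r  # ~ v * delta / r**2 for small delta

v, delta = 1.0, 0.001
for r in (0.01, 0.05, 0.20):
    print(f"baseline risk {r:.2f}: reduction worth {value_of_risk_reduction(v, r, delta):.3f}")
# The same absolute risk reduction is worth far less when baseline risk is high.
```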

I wonder about different approaches to SIA. For example, could a different version of SIA be used to argue that we are likely the ancestors of a large civilization? Would this up-weight the chances of cosmic expansion?

comment by mako yass (MakoYass) · 2022-03-07T01:33:29.906Z · LW(p) · GW(p)

> On the other hand, even if we went extinct, the universe wouldn’t remain empty, since some other civilization would take our place.

Yes, but most of my existential risk comes from AGI misalignment, which would not follow this law, because a misaligned AGI is likely to spread out, fill our volume, and be as immovable to alien civs as we would have been.

> Moving quickly can allow humanity to claim a larger fraction of the universe for itself.

The incentives to move quickly were actually a lot greater before grabby aliens, due to accelerating cosmological expansion. (I learned this from a nice Ord paper, but there is also this Kurzgesagt video.) We expected to lose about three galaxies every passing day that we failed to set out towards the stars. Under this model, if we don't get them, someone else will, which is not as bad, except insofar as they would get to them slightly later, a lot of energy would be lost as heat from the stars in the meantime, and a larger proportion of them might be claimed by unaligned AGI, which we wouldn't love so much as other living species.

> The possibility of meeting alien civilizations also means that we have to consider how our actions will affect our reputation

We will have changed a lot since those radio broadcasts were sent out. It won't be long, on a cosmic scale, between them receiving those broadcasts and meeting us in person, and by the time we meet, we will be able to talk quickly enough that the dialogue will overwhelm any of that.

By the way, I came across this post while writing up some of my own thoughts about the implications of the Grabby Aliens model, which I've now posted and would recommend to any long-term strategy thinker: Grabby Aliens could be Good, could be Bad [LW · GW]