[Paper] Surviving global risks through the preservation of humanity's data on the Moon

post by avturchin · 2018-03-04T07:07:20.808Z · LW · GW · 5 comments


My article with David Denkenberger about surviving global risks through the preservation of data on the Moon has been accepted by Acta Astronautica. Such data preservation is similar to digital immortality, with the hope that the next civilization on Earth will return humans to life.

I also call this "Plan C" of x-risk prevention, where Plan A is stopping the global catastrophe and Plan B is surviving it in a refuge. Plan B was already covered in another of my articles, about aquatic refuges (that is, nuclear submarines), published in Futures.

Plan C could be implemented rather cost-effectively by adding long-lasting ("eternal") data carriers to many planned spacecraft, as the Arch Mission is planning to do.

Link: https://www.sciencedirect.com/science/article/pii/S009457651830119X

The article is behind a paywall, but the preprint is here: https://philpapers.org/rec/TURSGR

Abstract: Many global catastrophic risks threaten human civilization, and a number of ideas have been suggested for preventing or surviving them. However, if these interventions fail, society could preserve information about the human race and human DNA samples in the hope that the next civilization on Earth will be able to reconstruct Homo sapiens and our culture. This requires information preservation on the order of 100 million years, a little-explored topic thus far. It is important that a potential future civilization discover this information as early as possible, so a beacon should accompany the message in order to increase its visibility. The message should ideally contain information about how humanity was destroyed, perhaps including a continuous recording until the end; this could help the potential future civilization to survive. The best place for long-term data storage is under the surface of the Moon, with the beacon constructed as a complex geometric figure drawn by small craters or trenches around a central point. There are several cost-effective options for sending the message as opportunistic payloads on various planned landers.

Keywords: global catastrophic risks, existential risks, Moon, time capsule, METI


5 comments

Comments sorted by top scores.

comment by ESRogs · 2018-03-04T21:55:06.296Z · LW(p) · GW(p)
This requires information preservation on the order of 100 million years

Is this meant to be the (order of magnitude of the) expected time until another intelligent species evolves elsewhere in the galaxy and spreads as far as Earth?

I would expect a much larger number (given that we're currently at 1 intelligent civ per ~10 billion years).

Replies from: vanessa-kosoy, avturchin
comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-04T22:15:57.954Z · LW(p) · GW(p)

Don't forget that the Great Filters might lie far before the evolution of Homo sapiens. I guess the expected time depends on the details of the extinction event. If humanity is destroyed but, for example, mammals (or even primates) survive, then 10^8 years doesn't sound implausible for another intelligent species to evolve. If, let's say, only microscopic organisms survive, then... well, then intelligent life will probably not evolve on Earth again, since the Sun will cause the oceans to boil off in about 10^9 years.

comment by avturchin · 2018-03-05T05:46:39.535Z · LW(p) · GW(p)

No, it is the very approximate expected time until new intelligent life appears on Earth, and it doesn't include alien intelligence.

comment by Vanessa Kosoy (vanessa-kosoy) · 2018-03-04T07:30:22.445Z · LW(p) · GW(p)

Interesting! I haven't read the paper, but I wonder whether you consider the question of how the values of this hypothetical "next civilization" might radically depart from our own, and whether it might be the case that we don't want them to be able to reconstruct us (because of the things they would want to do to us)?

Replies from: avturchin
comment by avturchin · 2018-03-04T07:54:57.996Z · LW(p) · GW(p)

I wrote in the article that our relationship with the next civilization takes the form of an acausal deal: we provide them with information about how we fought global risks and failed, which will help them survive, and they provide us with "resurrection".

I didn't touch on the problem that we may not like the resurrection, as it is a more general problem that also applies to cryonics and even to life extension: what if I survive until the 22nd century and don't like the values of the people (or AIs) who dominate at that time?

I don't think that their values will radically depart from our values, because human values are a convergent product of socio-biological evolution, and the next civilization will most likely face similar evolutionary pressure on its values. However, even so-called human values may be unpleasant: think of a zoo.

But anyway, I think that being alive is better than being dead, except in the case of infinite torture, and I don't see a reason to recreate humans only to put them in a situation of infinite torture. Surely, some unpleasant moments could happen, but that is also part of human life even here on Earth.