[Paper] The Global Catastrophic Risks of the Possibility of Finding Alien AI During SETI

post by avturchin · 2018-08-28T21:32:16.717Z · LW · GW · 2 comments

[edit: it looks like the journal went extinct immediately after publishing the paper, so the link is no longer working]

My article on this topic has finally been published, 10 years after the first draft. I have discussed the problem before [LW · GW] on LW. The preprint, free of the paywall, is here.

The main difference between the current version and my previous post is that I now conclude that such an attack is less probable. If we take into account the distribution in the Universe of naive civilizations at our level and of civilizations that already have powerful AI and are SETI senders, the attack becomes possible only if most naive civilizations go extinct before they create their own AI. In that case, succumbing to a SETI attack may be net positive, as the chance that the message comes from a benevolent alien AI becomes our only way to escape otherwise near-inevitable extinction. In any case, we should be cautious with any alien message, especially one containing descriptions of computers and programs to run on them.
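To make the tradeoff concrete, here is a minimal toy sketch (my own illustration, not the paper's model): a civilization compares its survival probability if it opens and runs an alien message against its baseline survival probability if it ignores the message. The parameter names `p_attack` and `p_doom` are hypothetical.

```python
# Toy expected-survival sketch of the argument above (an illustration under
# assumed parameters, not the paper's actual model).

def survival_probabilities(p_attack: float, p_doom: float) -> tuple[float, float]:
    """Return (P(survive | open message), P(survive | ignore message)).

    p_attack: assumed probability that the alien message is a SETI-attack.
    p_doom:   assumed probability of going extinct before creating our own AI,
              if the message is ignored.
    """
    p_open = 1.0 - p_attack    # survive unless the message is an attack
    p_ignore = 1.0 - p_doom    # survive only by avoiding extinction on our own
    return p_open, p_ignore

# If most naive civilizations die out before building AI (high p_doom),
# opening the message can be net positive even if an attack is fairly likely:
print(survival_probabilities(p_attack=0.3, p_doom=0.9))  # ~ (0.7, 0.1)
```

The simplification is deliberate: a benevolent message is treated as guaranteeing survival and an attack as guaranteeing extinction, which is enough to show why a high background extinction risk can flip the decision in favor of opening.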

2 comments

comment by Eugen · 2018-08-30T11:56:47.207Z · LW(p) · GW(p)

Just out of curiosity: how probable do you think it is that any SETI contact will turn out to be AI-initiated rather than biological (in the broadest possible sense of that word)?

comment by avturchin · 2018-08-30T21:49:46.273Z · LW(p) · GW(p)

My estimates:

P(SETI) = 0.01

P(AI|SETI) = 0.99
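Combining the two (a straightforward application of the chain rule, assuming P(SETI) is the marginal probability of receiving any contact and P(AI|SETI) the conditional probability that the contact is AI-initiated):

$$P(\text{AI-initiated contact}) = P(\text{AI} \mid \text{SETI}) \cdot P(\text{SETI}) = 0.99 \times 0.01 \approx 0.01$$

That is, roughly a one-percent chance of any contact at all, with nearly all of that probability mass on the contact being AI-initiated rather than biological.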