Why not use active SETI to prevent AI Doom?

post by RomanS · 2023-05-05T14:41:40.661Z · LW · GW · 1 comment

This is a question post.


Let's assume that Eliezer is right: soon we'll have an AGI that is very likely to kill us all. (Personally, I think Eliezer is right.)

There are several ways to reduce the risk, in particular: speeding up alignment research and slowing down capabilities research, by various means. 

One underexplored way to reduce the risk is active SETI (also known as METI). 

The idea is as follows: we use a powerful radio transmitter to broadcast an explicit call for help toward nearby stars (something like the Arecibo message), in the hope that an advanced alien civilization receives it and intervenes before our AGI kills us.

The main advantage of the method is that it can be implemented by a small group of people within a few months, without governments and without billions of dollars. Judging by the running costs of the Arecibo Observatory, one could theoretically rent such a facility for a year for only about $8 million. Sending only a few hundred interstellar messages could be even cheaper.
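To make the arithmetic concrete, here is a rough cost sketch; the $8 million figure is the running-cost estimate above, while the message count and transmission time per message are assumptions for illustration.

```python
# Back-of-the-envelope cost sketch for the figures above.
ANNUAL_RUNNING_COST_USD = 8_000_000  # running-cost estimate quoted above
HOURS_PER_YEAR = 365 * 24

messages = 300         # "a few hundred space messages" (assumed)
hours_per_message = 1  # assumed; the 1974 Arecibo message itself took under 3 minutes

cost_per_hour = ANNUAL_RUNNING_COST_USD / HOURS_PER_YEAR
campaign_cost = messages * hours_per_message * cost_per_hour

print(f"Cost per transmitter-hour: ~${cost_per_hour:,.0f}")
print(f"Cost of {messages} one-hour transmissions: ~${campaign_cost:,.0f}")
```

Under these assumptions, a few hundred transmissions come to a few hundred thousand dollars, well below the cost of a full year of observatory time.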

Obviously, the method relies on the existence of an advanced alien civilization within a few light years of Earth. Such a civilization seems unlikely to exist, but who knows.

Is it worth trying?

Answers

answer by avturchin · 2023-05-05T17:31:12.996Z · LW(p) · GW(p)

It actually may work, but not because aliens will come to save us: there is no time. Rather, any signal we send into space will reach other stars ahead of an intelligence-explosion wave, and thus aliens may learn about the potentially hostile nature of our AI.

Our AI will know all this, and if it wants better relations with aliens, it may invest some resources in simulating friendliness. Cheap for the AI, cheap for us.
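A minimal sketch of the timing argument, assuming the post-AGI expansion wave travels at some fraction of light speed (the wave speeds below are illustrative assumptions):

```python
# A radio message travels at c, while any post-AGI expansion wave is
# presumably slower, so the warning arrives first.
def warning_head_start_years(distance_ly: float, wave_speed_c: float) -> float:
    """Years between a light-speed message and a wave moving at
    wave_speed_c (fraction of c) arriving at a star distance_ly
    light-years away, if both are launched at the same moment."""
    signal_arrival = distance_ly              # years, at speed c
    wave_arrival = distance_ly / wave_speed_c
    return wave_arrival - signal_arrival

for distance_ly in (10, 100, 1000):
    for wave_speed_c in (0.5, 0.9, 0.99):
        head_start = warning_head_start_years(distance_ly, wave_speed_c)
        print(f"{distance_ly:>5} ly, wave at {wave_speed_c:.2f}c: "
              f"warning arrives {head_start:,.1f} years earlier")
```

The closer the wave's speed gets to c, the smaller the head start, but it never reaches zero.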

answer by BrooksT · 2023-05-05T15:40:36.887Z · LW(p) · GW(p)

So imagine you're living like you are today, and somehow a message appears from a remote tribe that has never had contact with civilization before. The message reads: "Help! We've discovered fire and it's going to kill us! Help! Come quick!"

What do you do?

comment by Dagon · 2023-05-05T17:05:09.480Z · LW(p) · GW(p)

I'm not sure this analogy teaches us much.  A lot depends on what the surprise is - that there is a civilization there that knows how to communicate but hasn't yet, or that the civilization we've been leaving alone for Prime Directive reasons has finally discovered fire.   A lot depends on whether we take that fear seriously as well.

The answer could be any of:

  • come teach them to be safe, knowing it won't be the last interference we make.
  • open the interference floodgates - come study/exploit them, force them to be safe.
  • kill them before their fire escapes.
  • build firebreaks so they can't hurt others, but can advance by themselves.
answer by Dagon · 2023-05-05T14:48:54.345Z · LW(p) · GW(p)

I'm not sure how you envision "sending signals into space" as noticeably different from what we've been doing for the last 100 years or so.  Any civilization close enough to hear a more directed plea, and advanced enough to intervene in any way, is already monitoring the Internet and knows whatever some subset of us could say.

comment by ProgramCrafter (programcrafter) · 2023-05-05T15:21:47.409Z · LW(p) · GW(p)

Internet communications are mostly encrypted, so such a civilization would need to 1) be familiar with network protocols, particularly TLS, 2) inject its signals into our networks, and 3) somehow capture a response.

comment by RomanS · 2023-05-05T14:55:02.330Z · LW(p) · GW(p)

I'm not sure our radio noise is powerful enough to be heard at interstellar distances. Something like the Arecibo message is much more likely to reach another solar system. 
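A rough inverse-square sketch of why a directed beacon carries so much further than ordinary leakage; the effective isotropic radiated power (EIRP) values below are commonly cited ballpark figures and should be treated as assumptions:

```python
import math

# Received flux at distance d is roughly EIRP / (4 * pi * d^2).
LY_IN_M = 9.4607e15

ARECIBO_MESSAGE_EIRP_W = 2e13   # ~20 TW effective isotropic power (rough, commonly cited)
TV_STATION_EIRP_W = 5e6         # ~5 MW for a strong broadcast transmitter (assumed)

def flux_w_per_m2(eirp_w: float, distance_ly: float) -> float:
    d = distance_ly * LY_IN_M
    return eirp_w / (4 * math.pi * d ** 2)

for dist in (4.2, 50, 1000):    # Proxima Centauri, a nearby volume, a far target
    arecibo = flux_w_per_m2(ARECIBO_MESSAGE_EIRP_W, dist)
    tv = flux_w_per_m2(TV_STATION_EIRP_W, dist)
    print(f"{dist:>6} ly: Arecibo-style {arecibo:.2e} W/m^2, "
          f"TV leakage {tv:.2e} W/m^2 (ratio {arecibo / tv:.0e})")
```

Under these assumptions a directed Arecibo-style beam is millions of times brighter at the receiver than broadcast leakage, though both are extremely faint at interstellar distances.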

It could also be important to specifically send a call for help. A call for help indicates our explicit consent to an intervention, which could matter to an advanced civilization that has something like a non-intervention rule.

Replies from: Dagon
comment by Dagon · 2023-05-05T15:40:33.715Z · LW(p) · GW(p)

True enough.  My point is that anyone who CAN help is close enough and tech-advanced enough to have noticed the signals long ago, and gotten closer (at least sent listening stations) in order to hear.  Aliens who have not already done so, almost certainly can't.

The specific call for help is problematic in a different way.  The post is unclear about exactly who has the standing and ability to make the request.

answer by the gears to ascension · 2023-05-05T16:00:39.150Z · LW(p) · GW(p)

Desiderata for an alien civ that saves us:

  • able to meaningfully fight a hard superintelligence
  • either not already grabby and within distance to get here, or already grabby and on their way to us
  • isn't simply a strict maximizer gone grabby itself
  • won't just want everything that came from Earth dead as a result of retributive feelings towards grabby alien spores sent out to grow on nearby planets by the generated maximizer

up to y'all whether the intersection of these criteria sounds likely

comment by RHollerith (rhollerith_dot_com) · 2023-05-06T15:32:00.178Z · LW(p) · GW(p)

"able to meaningfully fight a hard superintelligence"

I am pretty sure that the OP's main source of hope is that the alien civ will intervene before a superintelligence is created by humans.

Replies from: programcrafter
comment by ProgramCrafter (programcrafter) · 2023-05-06T16:19:40.378Z · LW(p) · GW(p)

This is not strictly necessary, because we can at least save our genetic information, for the possibility of being recreated in the future. So we can also hope for a friendly alien civ to defeat the superintelligence even if humanity is extinct. If a hostile superintelligence takes over the universe, it's not likely that humans will ever be recreated.

answer by PeterMcCluskey · 2023-05-05T15:04:20.394Z · LW(p) · GW(p)

If a hostile alien civilization notices us, we’re going to die. But if we’re going to die from the AGI anyway, who cares?

Anyone with a p(doom from AGI) < 99% should conclude that the harm from this outweighs the likely benefits.
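As a toy expected-value sketch of how this tradeoff depends on the assumed probabilities (every number below is an illustrative assumption, not an estimate anyone here has made):

```python
# Crude model: broadcasting adds a chance of hostile contact (which kills us
# regardless) and a chance of rescue conditional on AGI doom.
def p_survival(p_doom_agi: float,
               p_hostile_contact: float,
               p_rescue_given_doom: float) -> float:
    """Survival probability if we broadcast: no hostile contact, and either
    AGI doom doesn't happen or aliens rescue us from it."""
    p_no_hostile = 1 - p_hostile_contact
    return p_no_hostile * ((1 - p_doom_agi) + p_doom_agi * p_rescue_given_doom)

p_doom = 0.90            # assumed p(doom from AGI)
baseline = 1 - p_doom    # survival probability if we stay silent

for p_hostile in (0.001, 0.01, 0.1):
    for p_rescue in (0.001, 0.01, 0.1):
        with_meti = p_survival(p_doom, p_hostile, p_rescue)
        verdict = "helps" if with_meti > baseline else "hurts"
        print(f"p(hostile)={p_hostile:<5} p(rescue)={p_rescue:<5} "
              f"survival {with_meti:.4f} vs {baseline:.4f} -> {verdict}")
```

In this toy model, broadcasting only helps when the chance of rescue is not much smaller than the chance of attracting a hostile civilization, which is exactly the disputed quantity in the replies below.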

comment by RomanS · 2023-05-05T15:23:51.677Z · LW(p) · GW(p)

I'm not sure about that. It depends on the proportion of alien civilizations that would cause more harm than good upon contact with us, and that proportion is unknown.

A common argument is that any interstellar civilization must be sufficiently advanced in both tech and ethics, but I don't find the argument very convincing.

1 comment

Comments sorted by top scores.

comment by gabo96 · 2023-05-06T15:50:44.264Z · LW(p) · GW(p)

Even if the alien civilization isn't benevolent, they would probably have more than enough selfish reasons to prevent a superintelligence from appearing on another planet.

So the question is whether they would be technologically advanced enough to arrive here in 5, 10, or 20 years, or whatever time we have left until AGI.
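Light-travel time alone puts a floor on how quickly any answer could arrive; a minimal sketch, using rounded distances to a few nearby stars:

```python
# The call for help goes out at c, and even a light-speed response (or probe)
# takes the same time to come back, so the round trip bounds the response time.
nearby_stars_ly = {
    "Proxima Centauri": 4.2,
    "Tau Ceti": 11.9,
    "A star 25 ly away": 25.0,
}

for name, distance_ly in nearby_stars_ly.items():
    round_trip_years = 2 * distance_ly  # message out + response back, both at c
    print(f"{name:>18}: response in no less than ~{round_trip_years:.1f} years")
```

So a response within 5 or 10 years requires a civilization closer than any known star, or one that is already on its way here.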

An advanced civilization that isn't itself a superintelligence would probably have already faced an AI extinction scenario and survived it, so they would stand a much higher chance of aligning AI than we do. But previous success at aligning an AI wouldn't guarantee future success.

Since we are certainly less intelligent than said advanced alien civilization, they would either have to suppress our freedom, at least partially, or find a way to make us smarter or more risk-averse.

Another question is whether the s-risk from an alien civilization would be worse than the s-risk from a superintelligence.