SETI Predictions

post by hippke · 2020-11-30T20:09:41.715Z · LW · GW · 8 comments

Contents

  The form of First Contact
  Its timeline
  Our response
  The consequences
  Meta Questions

I enjoyed reading through people's views of the AGI Predictions [LW · GW]. I know only a little about AI and a little bit more about SETI (Search for Extraterrestrial Intelligence), so I decided to make a similar poll. 

The First Contact scenario with extraterrestrial intelligence shares certain similarities with the emergence of AGI. In the following, I ask similar questions to those in the AGI post. In addition, I add questions derived from the SETI literature. These are intended to reflect on AGI from a new perspective. 

In SETI, it is debated whether First Contact would cause great disturbance and change, both positive and negative. After all, civilizations capable of contacting us across the oceans of time and space are expected to be much more advanced - just as AGI is expected to be "superior" (whatever that means) to baseline humans.

 

The form of First Contact

 

Its timeline

 

Our response

 

The consequences

 

Meta Questions

 

And finally, of course:

8 comments


comment by adamShimi · 2020-12-01T14:03:53.464Z · LW(p) · GW(p)

I was unconvinced at first, but in the end I see this post as a good way to slightly shift perspectives on AGI and make people think a bit more about their answers. Well done!

comment by Charlie Steiner · 2020-12-02T08:45:01.627Z · LW(p) · GW(p)

The twist at the end though. I had to go back and re-think my answers :P

comment by TurnTrout · 2020-12-01T19:39:34.044Z · LW(p) · GW(p)

Why do people have such low credences for "The effect of First contact is mostly harmful (e.g., selfish ETI, hazards)"? Most alien minds probably don't care about us? But perhaps caring about variety is evolutionarily convergent? If not, why wouldn't our "first contact" be extremely negative (given their tech advantage)?

Replies from: algon33
comment by algon33 · 2020-12-02T12:38:30.631Z · LW(p) · GW(p)

My personal reasons:

  1. I assumed the question was about the first few decades after "first contact".
  2. A large chunk of my probability mass is on first contact being unintentional, and something neither side can do much about. Or perhaps one "side" is unaware of it - for example, if we receive some message directed at no one in particular, or record the remnants of some extreme cosmic event that seems mighty unnatural.
  3. It feels like we're near certain to have created an AGI by then. I am unsure enough about the long-term timescales of AGI improvement, and their limits, that I can assign some credence to the AGI we make possessing relatively advanced technology. And so, it may be in a good bargaining position. If we make plenty of AIs, maybe they'll be less powerful individually, but they should still be quite potent in the face of a superior adversary.
comment by algon33 · 2020-12-01T19:13:56.358Z · LW(p) · GW(p)

You should alter the questions to make it clear whether "we" means humans or whatever we make that succeeds us.

Also, perhaps add a question on whether "first contact" will be us detecting them without their being aware of it.

comment by Tim Liptrot (rockthecasbah) · 2020-12-01T14:14:36.935Z · LW(p) · GW(p)

What’s SETI winter?

Replies from: hippke
comment by hippke · 2020-12-01T18:43:20.490Z · LW(p) · GW(p)

Like an AGI winter, a time of reduced funding.

comment by AnthonyC · 2020-12-01T13:57:05.853Z · LW(p) · GW(p)

"Will world GDP double within 4 years after receiving a message from outer space?"

Given what many of us here expect about future economic growth in the coming century, and the answers about preconditions for finding ETI, I suspect the answer to this might be dominated by when the message arrives and what tech level humans have reached by then.

 

"We ask ETI "do we live in a simulation"? They answer "yes"."

My own guess is that they answer "mu" or some equivalent. Especially if they're AGI.