I enjoyed reading through people's views of the AGI Predictions [LW · GW]. I know only a little about AI and a little bit more about SETI (Search for Extraterrestrial Intelligence), so I decided to make a similar poll.
The First Contact scenario with extraterrestrial intelligence shares certain similarities with the emergence of AGI. In the following, I ask similar questions to those in the AGI post. In addition, I add questions derived from the SETI literature. These are intended to reflect on AGI from a new perspective.
Why do people have such low credences for "The effect of First contact is mostly harmful (e.g., selfish ETI, hazards)"? Most alien minds probably don't care about us? But perhaps caring about variety is evolutionarily convergent? If not, why wouldn't our "first contact" be extremely negative (given their tech advantage)?
I assumed the question was about the first few decades after "first contact".
A large chunk of my probability mass is on first contact being unintentional, and something neither side can do much about. Or perhaps one "side" is unaware of it. Like if we receive some message directed to no one in particular, or record the remnants of some extreme cosmic event that seems mighty unnatural.
It feels like we're near certain to have created an AGI by then. I am unsure enough about the long-term timescales of AGI improvement, and their limits, that I can assign some credence to the AGI we make possessing relatively advanced technology. And so, it may be in a good bargaining position. If we make plenty of AIs, maybe they'll be less powerful individually, but they should still be quite potent in the face of a superior adversary.
"Will world GDP double within 4 years after receiving a message from outer space?"
Given what many of us here expect about future economic growth in the coming century, and the answers about preconditions for finding ETI, I suspect the answer to this might be dominated by when the message arrives and what tech level humans have reached by then.
"We ask ETI "do we live in a simulation"? They answer "yes"."
My own guess is that they answer "mu" or some equivalent. Especially if they're AGI.