Comments

Comment by gabo96 on Why not use active SETI to prevent AI Doom? · 2023-05-06T15:50:44.264Z

Even if the alien civilization isn't benevolent, it would probably have more than enough selfish reasons to prevent a superintelligence from appearing on another planet.

So the question is whether they would be technologically advanced enough to arrive here within 5, 10, or 20 years, or whatever time we have left until AGI.

An advanced civilization that isn't itself a superintelligence would probably have already faced an AI extinction scenario and survived it, so it would stand a much better chance of aligning AI than we do. But past success at aligning an AI wouldn't guarantee future success.

Since we are certainly less intelligent than said advanced alien civilization, they would either have to suppress our freedom, at least partially, or find a way to make us smarter or more risk-averse.

Another question would be whether s-risk from an alien civilization is worse than s-risk from a superintelligence.

Comment by gabo96 on [deleted post] 2023-05-06T15:07:40.754Z

I'd like to add another question: 

Why aren't we more concerned about s-risk than x-risk? 

Given that virtually everyone would prefer dying to enduring an indefinite amount of suffering for an indefinite length of time, I don't understand why more people aren't asking this question.