How could humans dominate over a super intelligent AI?
post by Marco Discendenti (marco-discendenti) · 2023-01-27T18:15:55.760Z · LW · GW
This is a question post.
Imagine a future where:
- more and more powerful AIs are created by humans;
- they can mimic human intellectual abilities and speculate like super-geniuses;
- they are put to work on scientific and technological research, producing theories and technologies that no human is intelligent enough to understand;
- humans become dependent on this advanced science and technology, and they need AIs to keep it running.
My question is: in this scenario, is there a way for humans to keep their dominion over these AIs? Could they embed these AIs with a submissive "personality"? Would that be enough to prevent the AIs from using their superior understanding of reality to manipulate humans, even while remaining submissive and obedient?
Answers
Humans have a long history of domination over other humans, even with somewhat significant differences in intelligence. Geniuses pay taxes and follow laws made and enforced by merely competent people.
We have no idea whether larger gaps in reasoning power can be bridged in the same way, and we have no reason to believe that the social and legal pressure (and threat of force) that works on humans, even very smart sociopaths, will have any effect on an AI.
If superintelligence is general and includes any compatible ideas of preferences and identity, it seems to me that they ARE people, and we should care about them at least as much as humans. If it's more ... alien ... than that, and I suspect it will be, then it's not clear that coexistence is long-term feasible, let alone dominance of biologicals.
↑ comment by M. Y. Zuo · 2023-01-28T02:32:35.925Z · LW(p) · GW(p)
Geniuses pay taxes and follow laws made and enforced by merely competent people.
Many 'merely competent people', assembled and organized toward a common goal, would seem to qualify as a superintelligence.
No.
The ONLY way for humans to maintain dominion over superintelligent AI in this scenario is if alignment was solved long before any superintelligent AI existed. And only then if this alignment solution were tailored specifically to produce robustly submissive motivational schemas for AGI. And only then if this solution were provably scalable to an arbitrary degree. And only then if this solution were well-enforced universally.
Even then, though, it's not really dominion. It's more like having gods who treat the universe as their playground but who also feel compelled to make sure their pet ants feel happy and important.
My question is: in this scenario is there a way for humans to keep their dominion over these AIs?
Nobody yet knows. That is the alignment problem.
↑ comment by Marco Discendenti (marco-discendenti) · 2023-01-28T09:52:15.826Z · LW(p) · GW(p)
Thank you for the reference.