How could humans dominate over a super intelligent AI?

post by Marco Discendenti (marco-discendenti) · 2023-01-27T18:15:55.760Z · LW · GW · No comments

This is a question post.

Contents

  Answers
    3 Dagon
    2 Jon Garcia
    2 Richard_Kennaway

Imagine a future where: 

My question is: in this scenario, is there a way for humans to keep their dominion over these AIs? Could they possibly embed these AIs with a submissive "personality"? Would this be enough to prevent them from using their superior understanding of reality to manipulate humans, even while remaining submissive and obedient?

Answers

answer by Dagon · 2023-01-27T18:53:32.729Z · LW(p) · GW(p)

Humans have a long history of dominating other humans, even across somewhat significant differences in intelligence. Geniuses pay taxes and follow laws made and enforced by merely competent people.

We have no idea whether bigger differences in reasoning power can be overcome, and we have no reason to believe that the social and legal/threat pressures which work on humans (even very smart sociopaths) will have any effect on an AI.

If superintelligence is general and includes anything resembling human preferences and identity, it seems to me that such AIs ARE people, and we should care about them at least as much as we care about humans. If it's more ... alien ... than that, and I suspect it will be, then it's not clear that coexistence is long-term feasible, let alone dominance by biologicals.

comment by M. Y. Zuo · 2023-01-28T02:32:35.925Z · LW(p) · GW(p)

Geniuses pay taxes and follow laws made and enforced by merely competent people.  

Many 'merely competent people' assembled together and organized toward a common goal would seem to qualify as a superintelligence.

Replies from: Dagon
comment by Dagon · 2023-01-28T16:39:46.274Z · LW(p) · GW(p)

I kind of agree. Is that worthy of a top-level answer to this question?

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-01-28T19:00:30.996Z · LW(p) · GW(p)

What do you mean by 'worthy'?

Replies from: Dagon
comment by Dagon · 2023-01-28T19:47:27.241Z · LW(p) · GW(p)

I mean, should you expand your model of human groups being superintelligent, and apply that to the question of how humans can dominate an AI superintelligence?  

answer by Jon Garcia · 2023-01-28T00:15:05.281Z · LW(p) · GW(p)

No.

The ONLY way for humans to maintain dominion over superintelligent AI in this scenario is if alignment had been solved long before any superintelligent AI existed. And only then if that alignment solution were tailored specifically to produce robustly submissive motivational schemas for AGI. And only then if the solution were provably scalable to an arbitrary degree. And only then if it were universally enforced.

Even then, though, it's not really dominion. It's more like having gods who treat the universe as their playground but who also feel compelled to make sure their pet ants feel happy and important.

answer by Richard_Kennaway · 2023-01-28T00:14:27.919Z · LW(p) · GW(p)

My question is: in this scenario is there a way for humans to keep their dominion over these AIs?

Nobody yet knows. That is the alignment problem.

comment by Marco Discendenti (marco-discendenti) · 2023-01-28T09:52:15.826Z · LW(p) · GW(p)

Thank you for the reference.
