Super AGI's Shortform

post by Super AGI (super-agi) · 2023-06-01T06:49:42.237Z · LW · GW · 3 comments


3 comments


comment by Super AGI (super-agi) · 2023-06-01T06:49:43.603Z · LW(p) · GW(p)

Is this proof that only intelligent life favors self-preservation?

Joseph Jacks' argument here at 50:08 is: 

1) If Humans let Super Intelligences do "whatever they want", they won't try to kill all the Humans (because they're automatically nice?)

2) If Humans make any (even feeble) attempts to protect themselves from Super Intelligences, then the Super Intelligences can and will have reason to try to kill all the Humans.

3) Humans should definitely build Super Intelligences and let them do whatever they want... what could go wrong? yolo!

 

 

comment by [deleted] · 2023-06-01T11:55:56.124Z · LW(p) · GW(p)
comment by Super AGI (super-agi) · 2023-06-01T16:47:10.886Z · LW(p) · GW(p)

P. If humans try to restrict the behavior of a superintelligence, then the superintelligence will have a reason to kill all humans.

 

Ah yes, the second part of Jacks' argument as I presented it was a bit hyperbolic. (Though I feel the point stands: he seems to suggest that any attempt to restrict Super Intelligences would "create the conditions for an antagonistic relationship" and would give them a reason to harm Humans.) I've updated the post with your suggestion. Thanks for the review and clarification.

 

Point 3) is meant to emphasize that:

  • he knows the risk and danger to Humans of creating Super Intelligences without fully understanding their abilities and goals, and yet 
  • he is in favor of building them and giving them free and unfettered ability to take any actions in the world that they see fit

This is, of course, an option that Humans could take. But the question remains: would this course of action keep risks to Humans and Human society at an acceptable level? Would it favor Humans' self-preservation?