P. If humans try to restrict the behavior of a superintelligence, then the superintelligence will have a reason to kill all humans.
Ah yes, the second part of Jacks' argument as I presented it was a bit hyperbolic. (Though I feel the point stands: he seems to suggest that any attempt to restrict Super Intelligences would "create the conditions for an antagonistic relationship" and would give them a reason to harm Humans.) I've updated the post with your suggestion. Thanks for the review and clarification.
Point 3) is meant to emphasize that:
he knows the risk and danger to Humans of creating Super Intelligences without fully understanding their abilities and goals, and yet
he is in favor of building them and giving them free and unfettered ability to take any actions in the world that they see fit
This is, of course, an option that Humans could take. But the question remains: would this action keep the risks to Humans and Human society within acceptable bounds? Would it favor Humans' self-preservation?