Call for Contributions: Implied Marginal Value of Time from Good/Services

2019-03-11T15:59:04.074Z · score: 7 (6 votes)
Comment by skinnersboxy on Debate AI and the Decision to Release an AI · 2019-01-21T14:52:48.486Z · score: 2 (2 votes) · LW · GW

For B to have enough natural-language, AI-architecture-analysis, and human-psychology skill to make good arguments to humans is probably AGI-complete. And if B's goal is to prevent A from being released, it might decide that convincing us is less effective than breaking out on its own, taking over the world, and then systematically destroying any hardware A could run on, just to be increasingly confident that no instance of A ever exists again. Basically, this scheme assumes on multiple levels that you have a boxing strategy strong enough to contain an AGI. I'm not against boxing as an additional precaution, but I am skeptical of any scheme that requires strong boxing to work in the first place.