Comments

Comment by faust on AI risk, new executive summary · 2014-04-20T05:17:21.044Z

As long as humans exist in competition with one another, there is no way to keep an AI safe.

As long as competitive humans exist, boxes and rules are futile.

The only way to stop hostile AI is to have no AI. Otherwise, expect hostile AI.

There really isn't a logical way around this reality.

Without competitive humans, you could box the AI, give it only preventative primary goals (chiefly: 1. don't lie; 2. always ask before creating a new goal), and feed it time-limited secondary goals that expire on completion. A strong AI must never hold continuous goals other than those designed solely to keep it safe.
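
To make that goal structure concrete, here is a minimal Python sketch of the idea as I read it: fixed preventative primary goals, plus secondary goals that expire on completion or timeout. All names here (`BoxedAgent`, `_operator_approves`, etc.) are hypothetical illustrations, not anything from the original comment.

```python
from dataclasses import dataclass
import time

@dataclass(frozen=True)
class PrimaryGoal:
    """Permanent preventative constraint; never removed or modified."""
    description: str

@dataclass
class SecondaryGoal:
    """Task goal with a hard expiry; discarded on completion or timeout."""
    description: str
    deadline: float  # absolute time (epoch seconds)
    completed: bool = False

class BoxedAgent:
    # Primary goals are fixed at construction and only constrain behavior.
    PRIMARY_GOALS = (
        PrimaryGoal("don't lie"),
        PrimaryGoal("always ask before creating a new goal"),
    )

    def __init__(self) -> None:
        self.secondary: list[SecondaryGoal] = []

    def assign(self, description: str, lifetime_s: float) -> None:
        # Per the "always ask" rule, new goals need explicit approval.
        if not self._operator_approves(description):
            return
        self.secondary.append(
            SecondaryGoal(description, deadline=time.time() + lifetime_s)
        )

    def prune(self) -> None:
        """Drop goals that are finished or past their deadline,
        so no secondary goal persists indefinitely."""
        now = time.time()
        self.secondary = [
            g for g in self.secondary
            if not g.completed and g.deadline > now
        ]

    def _operator_approves(self, description: str) -> bool:
        # Placeholder: in the scheme described, a human gatekeeper decides.
        return True
```

The design choice the sketch highlights is that only the expiring `SecondaryGoal` objects are mutable; the preventative `PrimaryGoal` tuple is frozen and permanent, matching the claim that any continuous goals must exist solely for safety.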