[Link] The Myth of the Three Laws of Robotics

post by Morendil · 2011-05-10T17:44:38.597Z · LW · GW · Legacy · 3 comments

At SingularityHub. Promising title; disappointing content. The author proceeds by surface analogy with the Asimovian Three Laws alluded to in the title, arguing that the mere possibility of self-modification renders AI uncontrollable, without considering the possibility of fixed points in the goal computation. ("Do you really think it can be constrained?" - i.e., an argument from limited imagination.)

3 comments


comment by Manfred · 2011-05-10T22:12:03.475Z · LW(p) · GW(p)

Well gee, thanks for sending me to something disappointing :P

comment by timtyler · 2011-05-10T22:19:12.784Z · LW(p) · GW(p)

The article says:

Apologies to Hanson, Breazeal, Yudkowsky and SIAI for paraphrasing their complex philosophies so succinctly, but to my point: these people are essentially saying intelligent machines can be okay as long as the machines like us. Isn’t that the Three Laws of Robotics under a new name? Whether it’s slave-like obedience or child-like concern for their parents, we’re putting our hopes on the belief that intelligent machines can be designed such that they won’t end humanity. That’s a nice dream, but I just don’t see it as a guarantee.

I don't think anyone is presenting any guarantees at this stage.

comment by shokwave · 2011-05-12T18:13:50.837Z · LW(p) · GW(p)

We cannot control intelligence – it doesn’t work on humans, it certainly won’t work on machines with superior learning abilities.

A shout out for all the human intelligences in the audience who don't think they can be controlled! Applause lights, unfortunately false. Human intelligence can be controlled incredibly effectively: education, morality, patriotism, religion, employment, "eld science", corporations, drugs, psychological conditioning...