Norbert Wiener's paper "Some Moral and Technical Consequences of Automation"

post by JonahS (JonahSinick) · 2013-07-21T01:01:16.689Z


In 1960, mathematician Norbert Wiener wrote an article titled "Some Moral and Technical Consequences of Automation". I'm struck by the strong overlap between certain passages from the paper and some of the themes that have been discussed on Less Wrong in connection with AI risk.

Overlapping with Eliezer's blog post The Hidden Complexity of Wishes and with his document Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures:

Here, however, if the rules for victory in a war game do not correspond to what we actually wish for our country, it is more than likely that such a machine may produce a policy which would win a nominal victory on points at the cost of every interest we have at heart, even that of national survival.

[...]

We all know the fable of the sorcerer's apprentice, in which the boy makes the broom carry water in his master's absence, so that it is on the point of drowning him when his master reappears. [...] Disastrous results are to be expected not merely in the world of fairy tales but in the real world wherever two agencies essentially foreign to each other are coupled in the attempt to achieve a common purpose. If the communication between these two agencies as to the nature of this purpose is incomplete, it must only be expected that the results of this cooperation will be unsatisfactory. If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it.

Overlapping with discussion as to whether people will build a Tool AI rather than an Agent AI:

We wish a slave to be intelligent, to be able to assist us in the carrying out of our tasks. However, we also wish him to be subservient. Complete subservience and complete intelligence do not go together.

[...]

It may be seen that the result of a programming technique of automatization is to remove from the mind of the designer and operator an effective understanding of many of the stages by which the machine comes to its conclusions and of what the real tactical intentions of many of its operations may be. This is highly relevant to the problem of our being able to foresee undesired consequences outside the frame of the strategy of the game while the machine is still in action and while intervention on our part may prevent the occurrence of these consequences.


2 comments


comment by AnatoliP · 2013-07-21T04:31:28.379Z

Very interesting.

It always amazes me how insightful scientists of the past sometimes were, even more so when you consider the technological capabilities of their time.

To put it another way: it's amazing how little we have progressed on the fundamental issues despite the exponential growth in computing power.