Robot ethics [link]

post by fortyeridania · 2012-06-01T15:43:29.295Z · LW · GW · Legacy · 2 comments

The Economist has a new article on ethical dilemmas faced by machine designers.

The article's main claims:

1. If a machine makes an immoral decision, neural networks make it too hard to determine who is at fault: the programmer, the operator, the manufacturer, or the designer. Thus, neural networks might be a bad idea.

2. Robots' ethical systems ought to resonate with "most people."

3. Proper robot consciences are more likely to arise given greater collaboration among engineers, ethicists, policymakers, and lawyers. Key quotation:

Both ethicists and engineers stand to benefit from working together: ethicists may gain a greater understanding of their field by trying to teach ethics to machines, and engineers need to reassure society that they are not taking any ethical short-cuts.

The second clause of the above sentence is quite similar to something Yudkowsky wrote, perhaps more than once, about the value of approaching ethics from an AI standpoint. I do not recall where he wrote it, nor did my search turn up the appropriate post.

2 comments


comment by Manfred · 2012-06-01T20:58:44.662Z · LW(p) · GW(p)

Hm. I would prefer that quote to look more like

Both ethicists and engineers stand to benefit from working together: ethicists may gain a greater understanding of their field by trying to teach ethics to machines, and engineers need to implement acceptable ethics without trial and error.

What I mean is that it seems wrong for the engineers to be doing something to "reassure society"; they should be doing it to get things right.

comment by timtyler · 2012-06-02T11:53:31.922Z · LW(p) · GW(p)

First, laws are needed to determine whether the designer, the programmer, the manufacturer or the operator is at fault if an autonomous drone strike goes wrong or a driverless car has an accident. In order to allocate responsibility, autonomous systems must keep detailed logs so that they can explain the reasoning behind their decisions when necessary. This has implications for system design: it may, for instance, rule out the use of artificial neural networks, decision-making systems that learn from example rather than obeying predefined rules.

I think this is nonsense. Our current legal system functions without "detailed logs"; humans manage to attribute blame without them. And even if logs were required, that wouldn't rule out the use of neural networks: a learned decision-maker can still record its inputs and outputs.
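To illustrate that last point, here is a minimal sketch (all names here, such as `BrakingPolicy` and the log fields, are hypothetical, and the "model" is a stand-in): even if the decision comes from a learned component whose internals are opaque, the surrounding system can still record what it saw, what it decided, and which model version was running, which is the kind of audit trail a court would actually care about.

```python
import json
import time


class BrakingPolicy:
    """Stand-in for a learned decision component (e.g. a neural network).

    The point is only that whatever produces the decision, its inputs and
    outputs can be logged; the internals stay a black box.
    """

    def decide(self, obstacle_distance_m, speed_mps):
        # Placeholder for a forward pass through a trained model.
        should_brake = obstacle_distance_m / max(speed_mps, 0.1) < 2.0
        confidence = 0.9 if should_brake else 0.7
        return should_brake, confidence


def decide_and_log(policy, sensor_frame, log_file):
    """Run the policy and append an audit record for later review."""
    should_brake, confidence = policy.decide(
        sensor_frame["obstacle_distance_m"], sensor_frame["speed_mps"]
    )
    record = {
        "timestamp": time.time(),
        "inputs": sensor_frame,        # what the system saw
        "decision": "brake" if should_brake else "continue",
        "confidence": confidence,      # how sure the model was
        "model_version": "policy-v1",  # which weights were in use
    }
    log_file.write(json.dumps(record) + "\n")
    return should_brake


if __name__ == "__main__":
    with open("decision_log.jsonl", "a") as log:
        decide_and_log(
            BrakingPolicy(),
            {"obstacle_distance_m": 12.0, "speed_mps": 8.0},
            log,
        )
```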