MIT is working on industrial robots that (attempt to) learn what humans want from them [link]

post by Dr_Manhattan · 2012-06-13T15:42:58.015Z · LW · GW · Legacy · 3 comments

http://web.mit.edu/newsoffice/2012/robot-manufacturing-0612.html

"Massachusetts Institute of Technology (MIT) researchers have developed an algorithm that enables a robot to quickly learn an individual's preference for a certain task and adapt accordingly to help complete the task."

It would be interesting to see what kinds of issues they run into. (Granted, this is a very restricted environment.)
3 comments


comment by Vaniver · 2012-06-13T16:28:13.932Z · LW(p) · GW(p)

I would prefer far narrower titles for developments like this. The setup is a three-step industrial task performed on several objects near one another. Some people prefer to do the first step on all the objects, then the second step, then the third; other people prefer to do all three steps on one object before moving to the next. This is a robot designed to learn, from subtle cues (like the person not hammering a bolt after it is placed, because they want a different bolt placed first), which of the two strategies the person wants to follow. (There may actually be more strategies, but those two seem like the dominant ones.)

It's more about classifying the workers, and responding appropriately, than it is about 'wants,' even though the classification is want-based.
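As a toy illustration (this is not MIT's actual algorithm; the strategy labels, the deviation rate `EPSILON`, and the cue encoding below are all made up), classifying a worker from such cues could be as simple as a Bayesian update over the two candidate strategies, where each observed transition between actions is evidence for one strategy or the other:

```python
# Hypothetical sketch: infer which strategy a worker prefers from
# whether their next step stays on the same object or moves to another.
BREADTH = "breadth-first"  # one step across all objects, then the next step
DEPTH = "depth-first"      # all steps on one object, then the next object

EPSILON = 0.1  # assumed chance a worker deviates from their preferred strategy

def update(posterior, moved_to_same_object):
    """One Bayesian update on a single observed transition."""
    likelihood = {
        BREADTH: EPSILON if moved_to_same_object else 1 - EPSILON,
        DEPTH: 1 - EPSILON if moved_to_same_object else EPSILON,
    }
    unnorm = {s: posterior[s] * likelihood[s] for s in posterior}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

posterior = {BREADTH: 0.5, DEPTH: 0.5}  # uniform prior
# Observed cues: the worker mostly switches objects between steps.
for same_object in [False, False, True, False]:
    posterior = update(posterior, same_object)

print(posterior)  # roughly {'breadth-first': 0.99, 'depth-first': 0.01}
```

On that made-up observation sequence, a few transitions are enough to make one strategy dominant, which matches the "quickly learn an individual's preference" framing in the press release.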

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2012-06-13T18:41:28.840Z · LW(p) · GW(p)

Yeah, as usual, journalism generalizes claims for the sake of both sensationalism and comprehensibility. I tried to downplay it a bit with my choice of words, but still ended up channeling the original writing.

The thing I find interesting is that these semi-autonomous systems might run into issues of defining utility (particularly systems that carry some level of danger, such as autonomous cars and drones). It might be an area where people start feeling a need for formalization, which could lead some academics into FAI territory (which I think is a good thing).

Replies from: Vaniver
comment by Vaniver · 2012-06-13T19:51:56.954Z · LW(p) · GW(p)

Indeed; it's definitely on-topic and interesting work, and I expect that simple people-reading models like this will do a tremendous amount of good and make significant progress in parallel with the first-principles work that SI appears to be doing.