Yale Creates First Self-Aware Robot?
post by JQuinton · 2012-09-28T17:43:27.803Z · LW · GW · Legacy · 21 comments
Apparently a PhD candidate at the Social Robotics Lab at Yale created a self-aware robot:
In the mirror test, developed by Gordon Gallup in 1970, a mirror is placed in an animal’s enclosure, allowing the animal to acclimatize to it. At first, the animal will behave socially with the mirror, assuming its reflection to be another animal, but eventually most animals recognize the image to be their own reflections. After this, researchers remove the mirror, sedate the animal and place an ink dot on its frontal region, and then replace the mirror. If the animal inspects the ink dot on itself, it is said to have self-awareness, because it recognized the change in its physical appearance.
[...]
To adapt the traditional mirror test to a robot subject, computer science Ph.D. candidate Justin Hart said he would run a program that would have Nico, a robot that looks less like R2D2 and more like a jumble of wires with eyes and a smile, learn a three-dimensional model of its body and coloring. He would then change an aspect of the robot’s physical appearance and have Nico, by looking at a reflective surface, “identify where [his body] is different.”
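The article doesn't publish the algorithm, but the core operation it describes -- comparing a learned appearance model of the body against the observed reflection and flagging where they differ -- is easy to illustrate. Here is a minimal, hypothetical Python sketch (the function name, data layout, and threshold are my own assumptions, not the Yale team's code):

    import numpy as np

    def find_appearance_changes(self_model, mirror_image, threshold=30.0):
        """Return a boolean mask of where the observed reflection differs
        from the robot's stored appearance model (both H x W x 3 arrays)."""
        # A mirror flips the image left-to-right, so undo that before comparing.
        observed = mirror_image[:, ::-1, :].astype(float)
        # Per-pixel color distance between expectation and observation.
        diff = np.linalg.norm(observed - self_model.astype(float), axis=-1)
        # Anything differing by more than the threshold counts as "changed".
        return diff > threshold

    # Toy usage: a plain gray body model, with a red mark added to the reflection.
    model = np.full((64, 64, 3), 128, dtype=np.uint8)
    reflection = model[:, ::-1, :].copy()
    reflection[10:14, 10:14] = [255, 0, 0]   # the experimenter's mark
    print(find_appearance_changes(model, reflection).sum())  # 16 changed pixels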
What do Less Wrongians think? Is this "cheating" traditional concepts of self-awareness, or is self-awareness "self-awareness" regardless of the path taken to get there?
21 comments
Comments sorted by top scores.
comment by JoshuaZ · 2012-09-28T17:51:56.944Z · LW(p) · GW(p)
This is really just a robot built to pass a specific test. It isn't that different from a robot programmed to say "I'm aware and am aware of my own awareness." Don't confuse a useful proxy test with genuine self-awareness.
Replies from: atucker, thomblake, Luke_A_Somers
↑ comment by atucker · 2012-09-28T18:40:47.055Z · LW(p) · GW(p)
I don't at all think that this robot possesses full-blown human-style self-awareness, but, depending on the actual algorithms used in self-recognition, I think it passes the mirror test in a meaningful way.
For instance, if it learned to recognize itself in the mirror by moving around and noticing really strong correlations between its model of its own movements and the image it sees, then concluded that the image is a reflection of itself, then identified visual changes to itself, I would say that it has a self-model in a meaningful and important way. It doesn't contextualize itself in a social setting, or model itself as having emotions, but it is self-representing.
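Concretely, a toy version of that correlation test might look like the following Python sketch (my own construction, under assumptions about what signals the robot has available -- not the actual algorithm):

    import numpy as np

    def looks_like_me(commanded_motion, observed_motion, threshold=0.9):
        """Decide whether an image region is a reflection of the robot itself
        by correlating its own motor commands with the motion seen there."""
        r = np.corrcoef(commanded_motion, observed_motion)[0, 1]
        return r > threshold

    # Toy usage: the mirror tracks the command almost perfectly (plus noise);
    # an unrelated agent's motion does not.
    rng = np.random.default_rng(0)
    command = np.sin(np.linspace(0, 10, 200))   # commanded joint velocity
    mirror = command + 0.05 * rng.standard_normal(200)
    stranger = rng.standard_normal(200)
    print(looks_like_me(command, mirror))    # True
    print(looks_like_me(command, stranger))  # False (almost surely)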
This robot, I would say, has a lesser degree of self-modeling capability: http://www.youtube.com/watch?v=ehno85yI-sA
It's able to update a model of itself, but not recognize itself.
Towards the end of the video, I feel a similar or even greater amount of sympathy for the robot's "struggle" as I do for injured crustaceans. I also endorse that level of sympathy, and can't really think of a meaningful functional difference between the two that makes the crustacean's intelligence more important.
If it instead just took for granted that it was looking at itself and updated a model of itself accordingly, it still has some kind of self-model, but a less important one.
Replies from: JoshuaZ
↑ comment by thomblake · 2012-09-28T19:01:52.240Z · LW(p) · GW(p)
Based on what I've seen before with Nico, I'm guessing it was able to figure out that the reflection was giving it information about itself, and then updated its self-model based on a change in the reflection.
I don't know what "genuine self-awareness" is, but this is a lot different from a robot programmed to say "I'm aware and am aware of my own awareness."
Replies from: MaoShan
↑ comment by Luke_A_Somers · 2012-09-28T18:38:02.861Z · LW(p) · GW(p)
Depends on how it did it, doesn't it? More details are certainly necessary before we can conclude anything.
If it turns out to actually be self-aware, but isn't that bright, then the field of AI suddenly gets very interesting and scary.
comment by Richard_Kennaway · 2012-09-28T18:46:04.875Z · LW(p) · GW(p)
Business as usual, nothing to see here.
comment by thomblake · 2012-09-28T19:06:20.706Z · LW(p) · GW(p)
Nico is a really great robot. Scaz's general approach there is interesting - instead of trying to make a robot that is intelligent by whatever means, he tries to build the robot using our best models of humans, in order to test those models and find out more about humans.
So, for example, he would implement eye tracking using visible light rather than the much easier infra-red, because that's how humans do it.
comment by JoshuaFox · 2012-09-29T18:56:36.462Z · LW(p) · GW(p)
If recognizing oneself in a mirror is what interests you, then this robot has it.
If modeling oneself formally is what interests you, then a quined Lisp program has it (a minimal example appears below).
If successfully pretending to be human, showing emotion, acting irrationally, or winning at chess is what interests you, then some of today's AIs have it.
First define what you are looking for in this context -- then check if an AI has it.
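(For the quined-program case: a quine is a program whose entire output is its own source code -- a complete, if shallow, formal self-model. A minimal Python version of the same idea as the Lisp one:)

    # A quine: prints its own source.
    s = '# A quine: prints its own source.\ns = {!r}\nprint(s.format(s))'
    print(s.format(s))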
comment by TimS · 2012-09-28T18:18:34.519Z · LW(p) · GW(p)
Why do we think that recognizing oneself in the mirror is strong evidence of self-awareness?
Replies from: Dolores1984
↑ comment by Dolores1984 · 2012-09-28T18:28:40.093Z · LW(p) · GW(p)
Because it correlates with intelligence and seems indicative of deeper trends in animal neurology. Probably not a signpost that carries over to arbitrary robots, though.
Replies from: None
↑ comment by [deleted] · 2012-09-28T19:34:22.975Z · LW(p) · GW(p)
Because it correlates with intelligence
The problem with that is that, for any being that can't clearly and unambiguously report its experience of mirror self-recognition to us (nonhuman animals generally -- there are claims of language use, but those would be considered controversial, to put it mildly, if used as evidence here), we have to guess whether or not the animal recognized itself based on its behaviors and their apparent relevance to the matter. It's necessarily an act of interpretation. Humans frequently mistake other humans for simpler, less-reflective beings than is actually the case, due to differences of habit, perception, and behavior -- simply because they don't react in expected ways to the same stimulus.
Human children have been subjected to the mirror test and passed or failed based on whether they tried to remove a sticker from their own faces. It should not be difficult to list alternative reasons why a child might not remove a sticker from their face. I find myself wondering if these researchers remember ever having been children...
Replies from: Dolores1984
↑ comment by Dolores1984 · 2012-09-28T21:14:19.119Z · LW(p) · GW(p)
Sure, there's some ambiguity there, but over adequately large sample sizes, trends become evident. Peer-reviewed research is usually pretty good at correcting for confounds that people reading about it think up in the first fifteen minutes.
Replies from: None
↑ comment by [deleted] · 2012-09-28T21:26:05.188Z · LW(p) · GW(p)
Sure, there's some ambiguity there, but over adequately large sample sizes, trends become evident.
That is a general defense of the concept of statistical analysis. It doesn't have anything to do with my point.
Peer-reviewed research is usually pretty good at correcting for confounds that people reading about it think up in the first fifteen minutes.
It's pretty damn slow about correcting for pervasive biases in the researcher population, though. There's a reason we talk about science advancing funeral-by-funeral.
comment by [deleted] · 2012-09-29T00:48:21.195Z · LW(p) · GW(p)
According to this test, Nico is self-aware in precisely the sense that a man born blind is not.
comment by DanielLC · 2012-09-29T02:14:39.051Z · LW(p) · GW(p)
Self-awareness has been defined as being aware of your own awareness. This test shows whether or not the subject is aware of the correlation between its body and the image in the mirror. It's not entirely clear what "awareness" is, but it's clearly in no way reducible to the correlation between your body and the image in a mirror. You'd have to be aware of your physical self to pass, but not your mental self.
It's a puzzle. It measures intelligence just like any other puzzle. Calling what it measures "self-awareness" is little more than a pun.
comment by Luke_A_Somers · 2012-10-01T13:41:46.716Z · LW(p) · GW(p)
It occurs to me that one of the best possible things that could happen here is for the first self-aware AI to be in a robot, and not all that smart. Why?
We would expect such a robot to be social, in ways that we wouldn't demand of a server rack. This would more readily expose any unfriendly elements of its programming (and unless the problem is a whole lot easier than it seems, there will be some).
So far, robots have been given the benefit of the doubt because they're obviously complicated appliances. Once that no longer applies - once a robot is past 'Wow, you're really good at imitating a person' and into 'Do I like you?' territory - we will naturally begin applying different standards to it.
On the other hand, it could be that friendliness sufficient for such limited AIs does nothing for a superintelligence. Even so, I think that this would raise the profile of the problem, give it more mindshare, and generally help.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-10-01T14:39:00.797Z · LW(p) · GW(p)
There is something entertainingly ironic about this sentiment being expressed on an online forum.
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2012-10-01T15:08:55.296Z · LW(p) · GW(p)
It's a lot harder to hide that you're a dog when you're not on the internet.
comment by Mitchell_Porter · 2012-09-29T07:18:33.351Z · LW(p) · GW(p)
I don't believe it's self-awareness because I think awareness, whether raw or reflective, is only a property of certain special entities, like the infamous quantum monads.
However, even from a functionalist perspective, you shouldn't jump to the conclusion that this is self-awareness. The mirror test, like the Turing test, is a behavioral test, but self-awareness isn't a behavior, it's a cognitive capacity, and you really need to examine the detailed semantics and cause-and-effect of how the program produced the behavior, to see whether "self-awareness" had anything to do with it.
comment by falenas108 · 2012-09-29T06:27:30.352Z · LW(p) · GW(p)
I think the self-awareness test is considered important because it's a big step in child development, but that doesn't mean it's very important in terms of intelligence. It doesn't seem too complicated to design a machine that can recognize itself, so this doesn't seem like much of an achievement.