Self awareness - why is it discussed as so profound?

post by Dmytry · 2012-02-22T13:58:28.124Z · LW · GW · Legacy · 21 comments

Something I find rather odd - why is self-awareness usually discussed as something profoundly mysterious and advanced?

People would generally agree that a dog can be aware of food in its bowl if the dog has seen or smelled it, and unaware of it otherwise. One would think that a dog can be aware of itself in so much as a dog can be aware of anything else in the world, like the food in the bowl. There isn't a great deal of argument about a dog's awareness of food.

Yet the question of whether a dog has 'self-awareness' quickly turns into a debate of opinions, language, and shifting definitions of what 'self-awareness' is, and into irrelevancies such as whether the dog is smart enough to figure out how a mirror works well enough to identify a paint blotch on itself1, or requests that it be shown beyond all doubt that the dog's mind is aware of the dog's own mind - something you could deny other humans just as successfully.

I find it rather puzzling.

My first theory is that it is just a case of avoiding a thought because of its consequences for the status quo. The status quo is that we, without giving it much thought, decided that self-awareness is a uniquely human quality, and then carelessly made our morality sound more universal by saying that self-aware entities are entitled to rights. At the same time we don't care too much about other animals.

At this point, having 'well established' notions in our heads - notions which weren't rationally established but just sort of happened over time - we don't so much try to actually think or argue about self-awareness as try to define self-awareness so that humans are self-aware and dogs aren't, while the definition still sounds general - or try to fight such definitions - depending on our feelings towards dogs.

I think this is a case of a general problem with reasoning. When there is an established status quo - one which has evolved historically - we can have real trouble actually thinking about it; instead we make up new definitions which sound as if they existed from the start and as if the status quo had been justified by them.

This gets problematic when we have to think about self-awareness for other purposes, such as AI.

1: I don't see how the mirror self-recognition test implies anything about self-awareness. You pick an animal that grooms itself and see whether it can groom itself using the mirror. That can work even if the animal merely matches what it wants to groom with what it sees in the mirror, without identifying either with a self (whatever that means). And it can fail if the animal doesn't have good enough pattern matching to make that match, even if it does identify what it grooms with a self and has a concept of self.

Furthermore, an animal that just wants to groom some object which is constantly nearby, and the grooming of which feels good, could, if capable of language, invent a name for this object - "foobar" - and when compiling a dictionary we wouldn't think twice about translating "foobar" as 'self'.

edit: Also, I'd say self-recognition complicates our model of mirrors, in the "why does a mirror swap left and right rather than up and down?" way. If you look at the room in the mirror, the mirror obviously swaps front and back. Clear as day. But if you look at 'self' in the mirror, there's this self standing there facing you, and its left side is swapped with its right side. The usual model of a mirror then becomes a rotation of 180 degrees around the vertical axis (not the horizontal one), followed by a swap of left and right but not of up and down. You end up with a more complicated, more confusing model of the mirror, likely because you recognized the bilaterally symmetric you in it.
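For what it's worth, the two descriptions really are the same linear map; here is a minimal numpy sketch of that claim (my own illustration, with axes chosen as x = left/right, y = up/down, z = front/back):

```python
import numpy as np

# Axes: x = left/right, y = up/down, z = front/back (toward the mirror).
# A mirror in the x-y plane simply reverses the front/back axis:
mirror = np.diag([1, 1, -1])

# The "person in the mirror facing me" model: rotate 180 degrees about the
# vertical (y) axis, then swap that person's left and right.
rotate_180_about_y = np.diag([-1, 1, -1])   # flips x and z
swap_left_right    = np.diag([-1, 1, 1])    # flips x back

composed = swap_left_right @ rotate_180_about_y

print(np.array_equal(mirror, composed))  # True: same transformation, described two ways
```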

21 comments


comment by Richard_Kennaway · 2012-02-23T12:39:13.912Z · LW(p) · GW(p)

why is self-awareness usually discussed as something profoundly mysterious and advanced?

Because "self-awareness" is sometimes used to mean "consciousness", which is indeed mysterious (nobody knows what it is -- if they did, less would be written about the question of what it is) and advanced (nobody knows what an explanation would even look like).

And "self-awareness" is also used to mean "having any sort of model of oneself", which many simple machines have -- a fairly trivial sort of thing. If one does not notice that the same word is being used to mean two different things, one mysterious and one mundane, the resulting confusion can be mistaken for even greater profundity, mysteriousness, and advanced thinking.

comment by Douglas_Reay · 2012-02-23T11:56:54.831Z · LW(p) · GW(p)

why is self-awareness usually discussed as something profoundly mysterious and advanced?

There are interesting questions connected with conscious self-awareness - specifically, whether our conscious experience is (and directs) the thought process, or whether it is a shadow that lags behind most actual decision making. There's an interesting experiment with split-brain patients in which one half of the brain can see a glass of water and reaches a hand out to it, while the other half is unaware of this and makes up a reason on the fly for why it carried out that action.

Have you read the essay on self-awareness by V. S. Ramachandran?

comment by RomeoStevens · 2012-02-22T23:28:40.155Z · LW(p) · GW(p)

I'm unsure how my internal experience is anything other than just one more sensory experience, although I haven't thought carefully enough about it yet to have high confidence.

comment by shminux · 2012-02-22T20:50:18.448Z · LW(p) · GW(p)

People should instead be asking "why do we think that we have self-awareness?"

comment by timtyler · 2012-02-22T19:10:46.589Z · LW(p) · GW(p)

why is self-awareness usually discussed as something profoundly mysterious and advanced?

One factor is probably that evolution built us to believe that we are the most wonderful and precious thing ever. We also appear to be built to believe that we are our egos. The combination of these factors apparently leads to some of the issues that you mention.

comment by Thomas · 2012-02-22T15:19:04.117Z · LW(p) · GW(p)

Someone who is NOT self-aware would find it difficult to understand what the hell some people are talking about.

I am not saying you are a zombie. But this self-awareness seems pretty special to me.

Replies from: Viliam_Bur, Dmytry
comment by Viliam_Bur · 2012-02-22T16:45:29.994Z · LW(p) · GW(p)

Someone who is NOT self-aware would find it difficult to understand what the hell some people are talking about.

Finally we have a test for p-zombies. :D

To prove that I am not a p-zombie -- self-awareness is having a model of myself, as opposed to just having a model of the environment. Not having self-awareness would be like watching a movie, thinking only about the things on the screen.

Of course an organism needs to interact with the world, but this interaction does not have to include a self-model. Simple movements need only reflexes. Walking in some direction could be modelled as moving the environment. Properties of the self, such as hunger, could be modelled as global properties. Relations with other objects or organisms can be modelled as properties of those objects or organisms. ("Hunger exists. There is an apple. Closer. Closer. Closer. Eating. Apple is good. No hunger.")

Having a model of myself is useful for getting information about myself. For example, I could watch animals walking on ice: under large animals the ice breaks, under small animals it does not. If I can model myself as an animal of a given weight, I can predict whether the ice will break under me.
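As a toy sketch (hypothetical numbers, just to illustrate how a self-model turns observations of others into a prediction about oneself):

```python
# Toy illustration (hypothetical numbers): a self-model lets observations of
# other animals be turned into a prediction about oneself.

# Observed: (weight in kg, did the ice break?)
observations = [(5, False), (20, False), (80, True), (150, True)]

# Heaviest animal seen NOT to break the ice, and lightest seen to break it.
safe_limit   = max(w for w, broke in observations if not broke)
danger_limit = min(w for w, broke in observations if broke)

def will_ice_break(my_weight):
    """Predict using my self-model (my own weight) plus what I saw happen to others."""
    if my_weight <= safe_limit:
        return False
    if my_weight >= danger_limit:
        return True
    return None  # unknown region: no observation covers this weight

print(will_ice_break(70))   # None -- not enough data
print(will_ice_break(90))   # True -- heavier than an animal that broke the ice
```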

When I model myself as a human, I can use my knowledge about other people to gain knowledge about me. By watching what other people do and what happens then, I can predict what is good and bad to do, without trying first.

My thoughts and emotions can be modelled as external facts, until I notice that other people don't share them. By analogy I can deduce that others have them too; an incorrect analogy will make me also believe in the thoughts and emotions of stones, trees, clouds, etc.

Now back to the dog's self-awareness... Since we don't need self-awareness all the time, it would be better to ask whether a dog can be self-aware, and under what circumstances. It would be even better to taboo the word and ask instead what kinds of models a dog has of itself, and how we can prove it experimentally. (A model should allow knowing some things without trying them.)

I think self-awareness is not a "yes or no" question; it seems possible to model one's own weight without modelling other aspects. Maybe after modelling enough partial aspects a pattern appears, and one starts to think about what else one could model about oneself. But that does not mean one automatically discovers everything. Attention must focus on the aspect; for example, when people learn dancing, they become aware of aspects of themselves they were not aware of before. This means that self-awareness can be increased for a human... and possibly also for a dog. Even if dogs usually don't develop some kinds of self-models, they might be able to develop them in some circumstances.

There are probably some kinds of self-awareness that humans don't possess but other intelligences could - for example, being aware of one's own algorithm. Some insights can be gained by meditation, or by training social skills or rationality; insights about the body can be gained by learning medicine, physical therapy, etc. What we usually call self-awareness is the baseline level of self-modelling that an average human in usual conditions is able to develop. It could be interesting to analyze the possible aspects of self-modelling and make a map of self-awareness.

Replies from: Dmytry
comment by Dmytry · 2012-02-22T18:27:00.132Z · LW(p) · GW(p)

That's one good definition.

The thing is that there's nothing complicated or mysterious whatsoever about having a self model. If I were to write an autopilot, I would include a flight simulator inside it, to test the autopilot's outputs and ensure they don't kill the passenger (me) *. I could go fancy and include the autopilot itself in the simulation, so as to ensure that the autopilot does not put the airplane into a situation where it can't evade a collision.

Presto, a self-aware airplane, which is about as smart as a brain-damaged fruit fly. It's even aware of the autopilot inside the airplane.
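To make that concrete, here is a minimal sketch of the idea in Python (all physics, names, and numbers are made up for illustration; real autopilots don't work this way, as the bracketed note below says):

```python
# A sketch, not real avionics: an "autopilot" that contains a crude simulator
# of the aircraft *and of itself*, and uses it to check its own output before
# applying it.

class PlaneModel:
    def __init__(self, altitude, vertical_speed):
        self.altitude = altitude
        self.vertical_speed = vertical_speed

    def step(self, elevator, dt=1.0):
        self.vertical_speed += elevator * dt          # toy dynamics
        self.altitude += self.vertical_speed * dt

class Autopilot:
    TARGET_ALTITUDE = 1000.0

    def raw_command(self, plane):
        # Naive proportional controller toward the target altitude.
        return 0.01 * (self.TARGET_ALTITUDE - plane.altitude) - 0.1 * plane.vertical_speed

    def command(self, plane):
        """Check the command against an internal model of the plane *and* of this
        same autopilot flying it for the next few steps; veto it if the rollout
        crashes."""
        cmd = self.raw_command(plane)
        sim_plane = PlaneModel(plane.altitude, plane.vertical_speed)
        sim_autopilot = Autopilot()                    # a model of itself
        sim_plane.step(cmd)
        for _ in range(20):
            sim_plane.step(sim_autopilot.raw_command(sim_plane))
            if sim_plane.altitude <= 0:
                return 0.0   # veto: the rollout predicts a crash (a real design would also raise an alarm)
        return cmd

plane = PlaneModel(altitude=900.0, vertical_speed=-5.0)
ap = Autopilot()
print(ap.command(plane))
```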

If I were to write a chess AI, the chess AI would be recursive: it tries a move and then 'thinks' about what it would do in the resulting position - using itself as its own self model.
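A minimal negamax-style sketch of that recursion (the game interface - legal_moves(), make(), evaluate(), is_over() - is assumed here; this is not a real chess engine):

```python
# The algorithm's "model of the opponent" is just itself, called one ply deeper
# with the sides swapped. evaluate() is assumed to score the position from the
# point of view of the side to move.

def best_move(game, depth):
    best, best_score = None, float("-inf")
    for move in game.legal_moves():
        # "What would I do in the opponent's place?" -- invoke self on the
        # resulting position, from the other side's point of view.
        score = -negamax(game.make(move), depth - 1)
        if score > best_score:
            best, best_score = move, score
    return best

def negamax(game, depth):
    if depth == 0 or game.is_over():
        return game.evaluate()
    return max(-negamax(game.make(m), depth - 1) for m in game.legal_moves())
```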

Speaking of dogs, the Boston Dynamics BigDog robot, from what I know, includes a model of its own physics. It is about as smart as a severely brain-damaged cockroach.

So you end up with a lot of non-living things being self-aware. Constantly self-aware, whereas a case can be made that humans aren't constantly self-aware. Non-living things dumber than a cockroach, being self-aware.

edit: one could shift the goalposts and require that the animal be capable of developing a self model; well, you can teach a dog to balance on a rope, and balancing on a rope pretty much requires some form of model of the body's physics. You can also make a pretty stupid (dumber-than-cockroach) AI in a robot that builds a self model - not only of the robot's body but of the AI itself.

[ I never worked on autopilots, and from what I gather they don't generally include this at runtime but are instead tested on a simulator during development. I value my survival and don't have a grossly inflated view of my coding abilities, so I'd add that simulator, and make a really loud alarm to wake the pilot if anything goes wrong - an example of me using a self model to improve my survival. From what I can see of the other programmers I know, many start out with an inflated view of their coding abilities which keeps biting them in the backside all day long, until they get it - perhaps becoming more self-aware. ]

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-02-22T21:45:20.683Z · LW(p) · GW(p)

So you end up with a lot of non-living things being self-aware.

In this sense self-awareness is easy; the question is awareness of what exactly, and how it is used.

Awareness of one's body position is less interesting; it can only be used for movement. For a biological social species, awareness of one's own behavior and mind probably leads to improved algorithms - perhaps it is necessary for some kinds of learning.

I am not sure what benefits self-awareness would bring to a machine... maybe it depends on its construction and algorithm. For example, when a machine has the task of computing something, a non-self-aware machine would just compute it, but a self-aware machine might realize that with more memory and a faster CPU it could do the calculation better.
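As a toy sketch of that last idea (names, numbers, and thresholds are all made up): a program that benchmarks itself - a crude self-model of its own speed - before deciding whether it can meet a deadline or needs more resources:

```python
import time

def benchmark_ops_per_second(n=200_000):
    # Crude self-model: measure how fast this machine runs a small loop.
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total += i * i
    return n / (time.perf_counter() - start)

def plan_computation(required_ops, deadline_seconds):
    speed = benchmark_ops_per_second()        # "how fast am I?"
    estimated_time = required_ops / speed
    if estimated_time <= deadline_seconds:
        return f"proceed (estimated {estimated_time:.1f}s)"
    return f"need roughly {estimated_time / deadline_seconds:.0f}x more compute"

print(plan_computation(required_ops=10**9, deadline_seconds=5.0))
```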

Replies from: Dmytry
comment by Dmytry · 2012-02-22T22:20:37.197Z · LW(p) · GW(p)

Yeah. Well, here you enter the realm of general intelligence - a general intelligence would just look at the world, see itself, and figure things out, including the presence of a self and such.

I'm not convinced that's how it usually works for H. sapiens. I don't believe that we are self-aware as a function of general intelligence, and here's why: we tend to have serious discussions of things like the philosophical zombie. The philosophical zombie is a failure to recognize the physical item that is the self as the self. I seriously think we're just hardcoded to be self-aware - we perceive some of our own thought processes in a similar way to how we perceive the external world. This confuses the hell out of people, to the point that they fail to recognize themselves in a physical system (hence p-zombies).

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-02-23T09:02:39.623Z · LW(p) · GW(p)

Some details about how exactly it works for Homo sapiens can be found in the works of Vygotsky and Piaget - they did some cool experiments on what kinds of reasoning a human child is generally capable of at what age. Some models need time and experience to develop, though maybe we have some hardware support that makes them click faster. For example, at some age children start to understand the conservation of momentum (when an interesting object disappears behind a barrier, they no longer look at the point where it disappeared, but at the opposite side of the barrier, where it should reappear). At some age children start to understand that their knowledge is different from other people's knowledge (a child is shown some structure from both sides, another person sees it only from one side, and the child has to say which parts of the structure the other person saw). So our models develop gradually.

Modelling thinking is difficult, because we cannot directly observe the thoughts of others, and the act of observing interferes with what is being observed. There are techniques that help. It is difficult to recognize oneself as a physical system when one doesn't know how exactly the system works. If I had no information about how the brain works, what reason would I have to believe that my mind is a function of my brain? My muscles move and I can see their shapes under my skin, but I never observe the brain in action. In a similar way, by observing a robot you would understand the wheels and motors, but not the software and the non-moving parts of the hardware - even if you were that robot.

comment by Dmytry · 2012-02-22T15:46:55.535Z · LW(p) · GW(p)

Yeah, I thought about it. To me the special thing about self-awareness is that I feel my mind internally in a way that I cannot feel any other object - yet I see it as just having some sort of loop inside that turns some of the internal data into qualia, an implementation detail. Without it I'd need to say "I think therefore I am" out loud, or infer the existence of a self from watching myself move my hands with my own eyes; that would be inconvenient, could impede the realization that a 'self' exists or require more thinking to arrive at it, and could reduce reproductive fitness (self-preservation could fail). But it would not exclude experiencing the qualia (it might have made it hard to talk about the qualia).

My understanding is that p-zombies are devoid of qualia, but not of self-knowledge.

I have other loops, even ones that most people lack - if I close my eyes I still see the outline of my hands (without colours or any other properties - just the outline); that's a form of synaesthesia. I can't turn it off any more than a grapheme-colour synaesthete can turn off his 'syntax highlighting'.

Replies from: David_Gerard
comment by David_Gerard · 2012-02-22T19:07:18.206Z · LW(p) · GW(p)

This suggests the special thing about humans is (a) that they model other humans and (b) that this model includes assuming the other person has an awareness of self; and that animals aren't modeled as humans, so we don't start with that assumption.

Replies from: Dmytry
comment by Dmytry · 2012-02-22T19:12:37.913Z · LW(p) · GW(p)

I dunno - I can make a game AI for an in-game airplane that models itself, the targets, the target AIs, the target AIs' self-modelling, the target AIs' modelling of the attacker AI, and so on. And that AI will still be about as smart as a brain-damaged fruit fly.

[ I can't really prove that fruit flies, or parrots, or crows, or dogs, or apes, or other humans, do this kind of thing, but I have no reason whatsoever to presume that they can't, if an AI that I write for a computer game can do it with quite little computing power. ]

The chess AI, for one, does just that, in its purest form. When testing a potential move, it makes the move (in memory) and then invokes its model of the other side (an adapted self model) to predict the other side's move in that situation; that in turn invokes its self model to predict its own move, and so on.

With animals... The cat on the chair next to me is cleaning herself. Cats like to be clean. Etc., etc. I think we start with this assumption as children, and then we start wanting to be special / get taught that we're so special and shouldn't care about animals / have to kill animals for food, and then we start redefining things in very odd ways so that humans come out self-aware and nothing else does. And as a result we end up with self-awareness being undefined, because the logical world is not so convenient and we can't come up with any definition of self-awareness under which some fairly stupid systems wouldn't be self-aware. That may well also be why consciousness and the like are so ill-defined.

edit:

Perhaps we want a concise definition that fits a specific purpose - humans are X [and perhaps other very big-brained animals which we don't need to be killing], but nothing else is X - and we want to express this without appearing to make humans special. But to our dismay, no such definition exists. We either end up with nonsense that allows for a world where everyone else is a p-zombie - where humans may not have X - or we end up with some good definition which unfortunately allows even very simple systems to be X, and then nobody wants to accept that definition.

Replies from: David_Gerard, Nornagest
comment by David_Gerard · 2012-02-22T23:37:25.965Z · LW(p) · GW(p)

This suggests that the special thing about humans is that they trigger each others' human-detectors, and all else is rationalisation.

Replies from: Incorrect
comment by Incorrect · 2012-02-23T00:13:30.406Z · LW(p) · GW(p)

That doesn't mean we have to care.

comment by Nornagest · 2012-02-24T00:55:25.400Z · LW(p) · GW(p)

The chess AI, for one, does just that, in its purest form. When testing a potential move, it makes the move (in memory) and then invokes its model of the other side (an adapted self model) to predict the other side's move in that situation; that in turn invokes its self model to predict its own move, and so on.

This doesn't seem quite right. A relevant analogy to "self" in a chess AI wouldn't be the black side, or the current configuration of the board from Black's perspective, or even the tree of Black's projected best moves; that's all state, more analogous to bodies or worlds than selves. A better analogy to "self" would be the inferred characteristics of the algorithm running Black: how aggressive it is, whether it goes for positional play or captures, whether it really likes the Four Knights Game, and so forth.

Some chess AI does track that sort of thing. But it's not remotely within the scope of classical chess AI as I understand it (bearing in mind that my only serious exposure to chess AI took a decidedly nonclassical approach), and I've never heard of a chess AI that tracked it recursively, factoring in inferred changes in the opponent's behavior if the AI's own parameters are tweaked. It'd be possible, of course, but very computationally intensive for all but the simplest games, and probably fairly low-fidelity given that computers don't play much like humans in most of the games I'm aware of.

Replies from: Dmytry
comment by Dmytry · 2012-02-24T01:20:36.506Z · LW(p) · GW(p)

Why doesn't the algorithm itself count as the self? The algorithm has a self model: itself, computing two plies less deep. And an enemy model, based on itself computing one ply less deep and playing the other side. (Black may play more conservatively.)

It is a little bit egocentric, yes. Not entirely so, though; the opponent is playing the other colour.

Also, people don't do this recursive modelling of the opponent's mood, due to lack of data. You can't infer 1000 bits of information from 10 bits.

edit: this should be contrasted with typical game AIs that just make the bot run around and shoot at people, without any self model: it knows you're there, it walks there, it shoots. That is the typical concept of a zombie.

Replies from: Nornagest
comment by Nornagest · 2012-02-24T01:39:57.022Z · LW(p) · GW(p)

I'm hesitant to call those models of any kind; they don't include any kind of abstraction, either of the program's internal state or of inferred enemy state. It's just running the same algorithm on different initial conditions; granted, this is muddled a little because classical chess AI doesn't have much internal state to speak of, just the state of the board and a tree of possible moves from there. Two copies of the same chess algorithm running against each other might be said to have a (uniquely perfect) model of their enemies, but that's more or less accidental.

I'd have to disagree about humans not doing other-modeling, though. As best I can tell we evaluate our actions relative to others primarily based on how we believe those actions affect their disposition toward us, and then infer people's actions and their effects on us from there. Few people take it much farther than that, but two or sometimes three levels of recursion is more than enough for this sort of modeling to be meaningful.

Replies from: Dmytry
comment by Dmytry · 2012-02-24T01:52:46.168Z · LW(p) · GW(p)

Actually, they don't have perfect models; the model looks fewer moves ahead.

With regard to what people are doing - I mean, we don't play chess like this. Yes, we model other people's state, but quite badly. The people who overthink it fail horribly at social interaction.

With chess, you could blank out ranks 1 to 3 and 6 to 8 for the first 10 moves, or the like, and then you'd have some private state for the AIs to model. Edit: or implement fog of war, where pieces only see the squares they attack. That doesn't make any fundamental difference here, except that now there is private state: things the enemy knows, things the enemy doesn't know, the assumptions the enemy makes about where your pieces are, etc. (The private things are on the board, but then our private thoughts are likewise inside our non-transparent skulls, on the board of the universe.)
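A sketch of the simpler rank-blanking variant, just to make the 'private state' concrete (the board representation and names are my own):

```python
# For the first ten moves, each side's view of the board has the opponent's home
# ranks blanked out, so the opponent's setup is private information to be modelled.

HIDDEN_RANKS = {"white": {6, 7, 8}, "black": {1, 2, 3}}

def visible_board(board, viewer, move_number):
    """board: dict mapping (file, rank) -> (colour, piece). Returns the censored
    view that `viewer` ('white' or 'black') sees on this move."""
    if move_number > 10:
        return dict(board)                      # full information after the opening
    hidden = HIDDEN_RANKS[viewer]
    return {
        square: piece
        for square, piece in board.items()
        if piece[0] == viewer or square[1] not in hidden
    }

# Example: white cannot see black's rook on a8 during move 3.
board = {("a", 1): ("white", "rook"), ("a", 8): ("black", "rook")}
print(visible_board(board, "white", move_number=3))   # only white's own rook
```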

The issue, as I said earlier, is that we have an internal definition of what self-awareness is - something that all humans have, smart animals maybe have, and simple AIs can't have - and then we try to make an external definition that works like this without mentioning humans; except the world is not so convenient, and whatever definition you make, there is a simple AI that satisfies it.

Replies from: Nornagest
comment by Nornagest · 2012-02-24T02:28:49.580Z · LW(p) · GW(p)

Yeah, that's an acceptable way to give a chess AI internal state (or you could just use some parameters for its style of play, like I was discussing a few posts up). I'd call a chess AI that tracked its own state and made inferences about its opponent's knowledge of it self-aware (albeit with a very simple self in a very simple set of rules), but I suspect you'd find this quite difficult to handle well in practice. Fog of war is almost universally ignored by AI in strategy games that implement it, for example.

Self-awareness isn't magical, and it probably isn't enough to solve the problem of consciousness, but I don't think it's as basic a concept as you're implying either.