The "semiosis reply" to the Chinese Room Argument
post by Valerio · 2018-08-16T19:23:28.019Z · LW · GW · 13 comments
This is a link post for http://www.valeriotargon.org/sca/4-the-semiosis-reply-to-the-chinese-room-argument
No one, as far as I know, has proposed the following reply to the Chinese Room Argument against the claim that a program can be constitutive of understanding (the argument that a human non-Chinese-speaker cannot understand Chinese merely by having run a given program, even if that program enables the human to have input/output interactions in Chinese).
My reply goes as follows: a program, run by a human non-Chinese-speaker, may indeed teach that human Chinese. Humans learn Chinese all the time; it is merely uncommon for them to learn Chinese by running a program. Even though we are not aware of such a program (no existing program satisfies this requirement), we cannot a priori exclude its existence.
Before stating my reply, let me first steelman the Chinese Room Argument. If the human in the thought experiment of the Chinese Room is Searle, he may not know Chinese, but he knows a lot of things about Chinese: that it has ideograms and punctuation, which he can recognize; that it is a human language, which has a grammar; that it has the same expressive power as a language he knows, e.g. English; that it is very likely to have a symbol for “man” and a symbol for “earth”; and so on. Searle, unlike a computer processor, holds a lot of a priori knowledge about Chinese. He may be able to understand a lot of Chinese just because of this a priori knowledge.
Let us instead require the human in the Chinese Room to be someone with absolutely no experience of written languages, e.g. an Aboriginal from a purely oral culture. Let us suppose that Chinese appears so remote to the Aboriginal that she would never link it to humans (to the way she communicates) and would always regard it as something alien. She would never use knowledge of her world, even if somebody told her to run a given program to manipulate Chinese symbols. In this respect, she would be exactly like the computer processor and have no prior linguistic knowledge. The Chinese Room Argument is then reformulated: can a program run by the Aboriginal teach her Chinese (or, for that matter, any other language)?
I am going to reply that yes, a program to be run by the Aboriginal can teach her a language. I am going to call this reply the “semiosis reply”.
Semiosis is the performance element involving signs: during semiosis, a sign gets interpreted and related to an object. Signs can be symbols of Chinese or English text that a human may recognize. An object is anything available in the environment that can be related to a sign. It has been suggested that artificial systems can also perform (simulated) semiosis [Gomes et al., 2003]. Moreover, it has been suggested that objects can become available not only from sensorimotor experience but also from the symbolic experience of an artificial system [Wang, 2005]. A sign, as recognizable by a machine, can be related to a position in an input stream as perceived by the machine. For example, the symbol "z" stands for something much less frequent in English text than what the interpretant of the symbol "e" stands for. Semiosis is an iterative process in which an interpretant can itself become a sign to be interpreted (for example, the symbol "a" can be interpreted as a letter, as a vowel, as a word, as an article, etc.). At any given time the machine may select as potential signs anything available to it, including previous interpretants such as paradigms and any representation it has created.

I suggest that the machine should also interpret its own internal functions and structures through semiosis. These comprise "computation primitives", including conditionals, application, continuations and sequence formation, but also high-level functions such as read/write. The meaning that the machine can give to the symbols it experiences as input then becomes increasingly complex. Such meaning is not given by a human interpreter (parasitic meaning); rather, it is intrinsic to the machine. When a human executes the program on behalf of the machine, the human arrives at the same understanding, at the same meaning: simulating semiosis ultimately amounts to performing semiosis, and the Aboriginal can actually learn from the program. (Note how existing artificial neural networks, including deep learning systems for natural language processing, are ungrounded and devoid of meaning. A human, even one executing the training phase of an artificial neural network, cannot arrive at any understanding. This is because the artificial neural network, despite its evocative name, at no level simulates a human neural network. By contrast, semiotic artificial intelligence, despite having no representation of neurons, simulates the semiosis occurring in human brains.)
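To make this kind of interpretation concrete, here is a minimal Python sketch (an illustration only, not the semiotic algorithm itself; the short corpus and variable names are placeholders) in which the machine relates each character-sign to an object available to it, namely the sign's relative frequency in its input stream, which is what distinguishes "z" from "e":

```python
from collections import Counter

# Toy stand-in for the machine's symbolic experience; in this post it
# would be the full text of "The Adventures of Pinocchio".
corpus = 'No, children, you are mistaken. Once upon a time there was a piece of wood.'

# Relate each sign (character) to an object available to the machine:
# its relative frequency in the input stream.
counts = Counter(corpus)
total = sum(counts.values())
interpretants = {sign: n / total for sign, n in counts.items()}

# "e" is interpreted as a frequent sign, "z" as a rare (here absent) one.
print(interpretants.get('e', 0.0), interpretants.get('z', 0.0))
```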
Let me tell you how an Aboriginal, called SCA, could learn English just by running a program. Let us suppose that SCA is given as input the following text in English, the book "The Adventures of Pinocchio" (represented as a sequence of characters, with spaces and new lines replaced by special characters):
THE ADVENTURES OF PINOCCHIO§CHAPTER 1§How it happened that Mastro Cherry, carpenter, found a piece of wood that wept and laughed like a child.§Centuries ago there lived--§"A king!" my little readers will say immediately.§No, children, you are mistaken. Once upon a time there was a piece of wood...
This input contains 211,627 characters, all of which are incomprehensible symbols to SCA. (It is a very small corpus compared to those used to train artificial neural networks.)
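For concreteness, the preprocessing could look like the following sketch (the path pinocchio.txt is a placeholder for a local copy of the book; only new lines are marked here, with "§" as in the excerpt above):

```python
# Read the book and mark new lines with a special character, as in the
# excerpt above; spaces could be marked analogously.
with open('pinocchio.txt', encoding='utf-8') as f:
    raw = f.read()

corpus = raw.replace('\n', '§')
print(len(corpus))  # about 211,627 characters for the edition used here
```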
Let me tell you how SCA learns, through only seven reflection actions and only three iterations of a semiotic algorithm, to output something very similar to "I write". (It is suggested that the first thing SCA could output is "I said", while more processing would be needed to actually have her output "I write".)
SCA runs the following reflection actions (some of them at the level of a meta-language) and iterations of the semiotic algorithm (each iteration characterized by a syntagmatic algorithm and a paradigmatic algorithm).
Reflection action 1
SCA does not know anything about the text she reads, i.e. observes, but she knows that she can reproduce any of its symbols by writing with her stick in the sand. The actions of reading and writing can be put together so that the result of one action is the input to the other. Writing what is read is copying. Moreover, it is logical that when something is read, somebody wrote it.
Reflection action 2
SCA may of course already know about herself. However, let us make this explicit when she observes that an action of reading which does not correspond to a previous action of writing indicates that there must be an agent in the world to which an “I” is opposed.
Iteration 1 semiotic algorithm
SCA discovers the paradigm of uppercase and lowercase letters, i.e. symbols which can occur in a sequence such that, when they follow certain other symbols, the first of them is capitalized, and when they follow certain other symbols, it is normally not capitalized (consider “.§Poor”, “.§”Poor”, “, poor” and “ poor”). Proper nouns, e.g. “Pinocchio”, are an exception, as they obey their own capitalization rules. This does not hinder SCA from learning that “P” and “p” belong to the same paradigm.
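One way to picture this paradigmatic step (a simplified sketch under the assumption that shared right-contexts are enough; the toy corpus stands in for the full book) is to pair an uppercase and a lowercase letter whenever they are followed by the same short sequences, as "P" and "p" are both followed by "oor":

```python
from collections import defaultdict
import string

# Toy stand-in for the full book.
corpus = '.§Poor Pinocchio, poor Marionette!§"Poor fellow," said the poor man.'

# Record the short right-contexts (three following characters) of every letter.
contexts = defaultdict(set)
for i, ch in enumerate(corpus[:-3]):
    if ch.isalpha():
        contexts[ch].add(corpus[i + 1:i + 4])

# Tentatively place an uppercase and a lowercase letter in one paradigm
# when they share a right-context, e.g. "P" and "p" both followed by "oor".
for upper in string.ascii_uppercase:
    lower = upper.lower()
    shared = contexts[upper] & contexts[lower]
    if shared:
        print(upper, lower, sorted(shared))
```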
Reflection action 3
SCA looks for a way of applying the action of writing to itself. This amounts to a situation of “reported writing”, in which someone writes that someone (else) wrote something. She identifies possible candidates for the content of this action in words that stand out from other words due to capitalization. One candidate could be the capitalized words, i.e. the words that begin each new sentence. Another candidate could be all the words in direct speech (such as "A king!" in the excerpt above). This reflection action takes advantage of the fact that “The Adventures of Pinocchio” contains direct discourse. It would be more complicated for SCA to identify reported speech in a text using only indirect discourse.
Iteration 2 semiotic algorithm
SCA considers every sequence of two words and discovers several paradigms of words (words are considered in both their uppercase and lowercase versions). One paradigm comprises “said”, “cried” and “asked”. Another comprises “Pinocchio” and “Geppetto”. A third comprises “boy”, “man”, “voice”, “Marionette” and “Fairy”. These paradigms are discovered only because of side effects present in the text, in particular word-adjacency side effects.
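A rough sketch of how such word-adjacency side effects can surface a paradigm (again a simplification with a toy corpus; the real iteration works over all two-word sequences in the book): words that share left and right neighbours, such as "said", "cried" and "asked" after a closing quote, get grouped together.

```python
from collections import defaultdict
import re

# Toy stand-in for the full book.
corpus = ('"Help!" cried Pinocchio. "Come here," said Geppetto. '
          '"Why?" asked Pinocchio. "Wait," said the old man.')

tokens = re.findall("[a-z]+|[\"',.!?]", corpus.lower())

# For each word, record which tokens appear immediately to its left and right.
neighbours = defaultdict(set)
for prev, cur, nxt in zip(tokens, tokens[1:], tokens[2:]):
    neighbours[cur].add(('L', prev))
    neighbours[cur].add(('R', nxt))

# Words sharing adjacency contexts are candidates for one paradigm:
# "said", "cried" and "asked" all follow a closing quotation mark.
def shared(a, b):
    return len(neighbours[a] & neighbours[b])

for pair in [('said', 'cried'), ('said', 'asked'), ('said', 'pinocchio')]:
    print(pair, shared(*pair))
```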
Iteration 3 semiotic algorithm
SCA considers syntagms made up of words and comprising paradigms from the previous iteration. A more refined paradigm is created to contain words which stand in a relationship to quotations and direct discourse similar to that of “said”, “cried” and “asked”. Let us refer to this paradigm as p_saying_verbs.
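A possible sketch of this refinement (an illustration only; on this toy excerpt the counts are too small to filter anything, but on the full book a frequency threshold isolates the saying verbs): collect the word that immediately follows a closing quotation mark, the syntagmatic position that "said", "cried" and "asked" occupy.

```python
import re
from collections import Counter

# Toy stand-in for the full book.
corpus = ('"A king!" my little readers will say immediately. '
          '"Stop laughing!" cried Geppetto. '
          '"I will not!" answered the Marionette. '
          '"Oh, father," shouted Pinocchio.')

# A closing quotation mark is followed by whitespace; the next word is a
# candidate member of p_saying_verbs.
candidates = Counter(re.findall(r'"\s+(\w+)', corpus))
print(candidates)

# On the full corpus a frequency threshold discards accidental neighbours
# such as "my" while keeping the recurrent saying verbs.
p_saying_verbs = {w for w, n in candidates.items() if n >= 2}
print(p_saying_verbs)
```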
Reflection action 4
SCA compares occurrences of words in p_saying_verbs with occurrences of other words. She makes the hypothesis that words in paradigm p_saying_verbs correspond to the action of writing.
Reflection action 5
SCA makes the hypothesis that, when in combination with words corresponding to the action of writing, other words such as “Pinocchio” and “man” correspond to the agent of writing.
Reflection action 6
SCA considers the fact that there is a proper noun which appears almost only in direct discourse but does not seem to be writing-capable: the first-person pronoun “I”, which occurs more than 500 times in direct discourse. SCA makes the hypothesis that, when she reads “I” in direct discourse, someone is self-referring.
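A sketch of this check (toy excerpt again; on the full book the count exceeds 500): gather the text inside quotation marks and count the standalone word "I" there.

```python
import re

# Toy stand-in for the full book.
corpus = ('"I am hungry," said Pinocchio. "Why?" asked the boy. '
          '"Because I have eaten nothing," he answered.')

# Join the spans of direct discourse and count the standalone word "I".
quoted = ' '.join(re.findall(r'"([^"]*)"', corpus))
print(len(re.findall(r'\bI\b', quoted)))  # 2 here; over 500 on the full book
```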
Reflection action 7
SCA makes the hypothesis that when she performs an action of writing, she may refer to this action using a word in the paradigm p_saying_verbs and the word “I”. Therefore, she may write: “I said”.
From “I said” to “I write”
SCA does not know about verb tenses. She can, however, run another iteration of the semiotic algorithm to find the morphemes of verb conjugation “-s”, “-ed” and “-ing”. She then expands the paradigm p_saying_verbs to include “say” and “saying” as well. She uses Occam’s razor to select the simplest hypothesis for referring to an action of writing.
She considers:
- the 6 occurrences in direct discourse of the sequence "I said" (always in the presence of quotations inside quotations);
- the 6 occurrences in direct discourse of the sequence "I say" (mostly in the presence of exclamation marks);
- the 2 occurrences in direct discourse of the sequence "I ask" (in the presence of question marks).
She finds that quotations inside quotations are more special than exclamation marks, which occur more than 600 times. Therefore, “I say” is the simplest hypothesis for referring to her action of writing. SCA can output this sequence.
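The comparison itself amounts to a few counts over the corpus, along the lines of the following sketch (a toy excerpt; the full book gives the 6/6/2 occurrences and the 600-plus exclamation marks cited above):

```python
import re

# Toy stand-in for the full book.
corpus = ('"I say, what a nose!" said the man. '
          '"And then I said \'go away\'," he cried. '
          '"Will you come, I ask you?" "Hurrah!" "No more school!"')

# Count the competing candidate sequences.
for seq in ('I said', 'I say', 'I ask'):
    print(seq, corpus.count(seq))

# Count how special their accompanying contexts are: exclamation marks are
# common, quotations inside quotations are rare, so "I say" wins as the
# plainest, least marked way of referring to one's own act of writing.
print('exclamation marks:', corpus.count('!'))
print('quotations inside quotations:', len(re.findall(r"'[^']*'", corpus)))
```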
Finally, interacting with SCA could make her learn to use the words “I write” instead. Let us suppose that we can send SCA a message of the form "No, you write". Based on this message, SCA searches the corpus again and makes the hypothesis that "write" and "written" belong to the same paradigm ("written" being the irregular past participle of "write"). SCA retrieves the following four passages about written signs, all of which contain the word "written":
"Oh, really? Then I'll read it to you. Know, then, that written in letters of fire I see the words: GREAT MARIONETTE THEATER.
on the walls of the houses, written with charcoal, were words like these: HURRAH FOR THE LAND OF TOYS! DOWN WITH ARITHMETIC! NO MORE SCHOOL!
The announcements, posted all around the town, and written in large letters, read thus:§GREAT SPECTACLE TONIGHT
As soon as he was dressed, he put his hands in his pockets and pulled out a little leather purse on which were written the following words:§The Fairy with Azure Hair returns§fifty pennies to her dear Pinocchio§with many thanks for his kind heart.
SCA then makes the hypothesis that the paradigm of "write" and the paradigm of "say" belong to one and the same larger paradigm, and that the former should be used when reporting on her own act of displaying text. Finally, SCA outputs the sequence "I write", which is nowhere to be found in the corpus.
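Retrieving those passages is again a simple corpus search, roughly as in this sketch (a toy excerpt stands in for the book; the real search returns the four passages quoted above):

```python
import re

# Toy excerpt standing in for the full book.
corpus = ('Know, then, that written in letters of fire I see the words: '
          'GREAT MARIONETTE THEATER. On the walls of the houses, written '
          'with charcoal, were words like these: HURRAH FOR THE LAND OF TOYS!')

# After the correction "No, you write", SCA looks for paradigm-mates of
# "write" and retrieves the passages containing "written".
passages = re.findall(r'[^.!?]*\bwritten\b[^.!?]*[.!?]', corpus)
for p in passages:
    print(p.strip())
```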
We can give SCA instructions for each of the reflection actions and each iteration of the semiotic algorithm. We can write these instructions so that they can be executed by a computer processor, i.e. so that they constitute a computer program. I have suggested that the program should use only composable high-level functions (operations in Peano arithmetic instead of calls to a black-box arithmetic logic unit, see [Targon, 2016]) so that it operates only with cognitively grounded semiotic symbols, can automate reflective programming (see [Targon, 2018]), and can simulate semiosis. It follows that when the program is executed by a human, the human achieves semiosis (the "semiosis reply" to the Chinese Room Argument).
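As a small illustration of what "operations in Peano arithmetic instead of calls to a black-box arithmetic logic unit" could look like (a sketch of the general idea only, not the actual implementation described in [Targon, 2016]): numbers are built from zero and a successor constructor, and addition is defined by the two Peano rules, so every step remains inspectable by whoever executes the program.

```python
# Numbers as nested successor applications: ZERO, S(ZERO), S(S(ZERO)), ...
ZERO = ()

def succ(n):
    return (n,)                     # S(n)

def add(m, n):
    if n == ZERO:                   # m + 0 = m
        return m
    return succ(add(m, n[0]))       # m + S(k) = S(m + k)

def to_int(n):
    # For display only; this helper does use host arithmetic,
    # whereas add() above relies solely on the Peano rules.
    return 0 if n == ZERO else 1 + to_int(n[0])

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))      # 5
```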
Bibliography
Gomes, A., Gudwin, R., Queiroz, J. (2003), "On a computational model of Peircean semiosis", In: Hexmoor, H. (ed.) KIMAS 2003: 703-708
Wang, P. (2005), "Experience-grounded semantics: a theory for intelligent systems", Cognitive Systems Research, 6: 282-302
Targon, V. (2018), "Towards Semiotic Artificial Intelligence", BICA 2018.
13 comments
comment by TruePath · 2018-08-19T10:49:50.876Z · LW(p) · GW(p)
You are getting the statement of the Chinese Room wrong. The claim isn't that the human inside the room will learn Chinese. Indeed, it's a key feature of the argument that the person *doesn't* ever count as knowing Chinese. It is only the system consisting of the person plus all the rules written down in the room, etc., which knows Chinese. This is what's supposed to (but not convincingly, IMO) be an unpalatable conclusion.
Secondly, no one is suggesting that there isn't an algorithm that can be followed which makes it appear as if the room understands Chinese. The question is whether or not there is some conscious entity, corresponding to the system of the guy plus all the rules, which has the qualitative experience of understanding the Chinese words submitted, etc. As such, the points you raise don't really address the main issue.
↑ comment by Valerio · 2018-08-27T21:14:14.561Z · LW(p) · GW(p)
TruePath, you are mistaken: my argument addresses the main issue of explaining computer understanding (moreover, you seem to be confusing the Chinese Room Argument with the “system reply” to it).
Let me clarify. I could write the Chinese room argument as the following deduction argument:
1) P is a computer program that does [x]
2) There is no computer program sufficient for explaining human understanding of [x]
=> 3) Computer program P does not understand [x]
In my view, assumption (2) is not demonstrated and the argument should be reformulated as:
1) P is a computer program that does [x]
2’) Computer program P is not sufficient for explaining human understanding of [x]
=> 3) Computer program P does not understand [x]
The argument still holds against any computer program satisfying assumption (2’). Does a program exist, however, that can explain human understanding of [x] (a program such that a human executing it understands [x])?
My reply focuses on this question. I suggest considering artificial semiosis. For example, a program P learns, solely from the symbolic experience of observing symbols in a sequence, that it should output “I say” (I have described what such a program would look like in my post). Another program Q could learn, solely from symbolic experience, how to speak Chinese. Humans do not normally learn a rule for using “I say”, or how to speak Chinese, in these ways, because their experience is much richer. However, we can reason about the understanding that a human would have if he could have only symbolic experience and the right program instructions to follow. The semiosis performed by the human would not differ from the semiosis performed by the computer program. It can be said that program P understands a rule for using “I say”. It could be said that program Q understands Chinese.
You can consider [x] to be a capability enabled by sensorimotor experience. You can consider [x] to be consciousness. My “semiosis reply” could of course be adapted to these situations too.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-08-28T00:51:36.221Z · LW(p) · GW(p)
Let me clarify. I could write the Chinese room argument as the following deduction argument:
1) P is a computer program that does [x]
2) There is no computer program sufficient for explaining human understanding of [x]
=> 3) Computer program P does not understand [x]
This is not at all correct as a summary of Searle’s argument.
A more proper summary would read as follows:
1. P is an instantiated algorithm that behaves as if it [x]. (Where [x] = “understands and speaks Chinese”.)
2. If we examine P, we can easily see that its inner workings cannot possibly explain how it could [x].
3. Therefore, the fact that humans can [x] cannot be explainable by any algorithm.
That the Room does not understand Chinese is not a conclusion of the argument. It’s taken as a premise; and the reader is induced to accede to taking it as a premise, on the basis of the “intuition pump” of the Room’s description (with the papers and so on).
Now, you seem to disagree with this premise (#2). Fair enough; so do I. But then there’s nothing more to discuss. Searle’s argument collapses, and we’re done here.
The rest of your argument seems aimed at shoring up the opposing intuition (unnecessary, but let’s go with it). However, it would not impress John Searle. He might say: very well, you propose to construct a computer program in a certain way, you propose to expose it to certain stimuli, yes, very good. Having done this, the resulting program would appear to understand Chinese. Would it still be some deterministic algorithm? Yes, of course; all computer programs are. Could you instantiate it in a Room-like structure, just like in the original thought experiment? Naturally. And so it would succumb to the same argument as the original Room.
↑ comment by Valerio · 2018-08-28T11:08:49.083Z · LW(p) · GW(p)
A more proper summary would read as follows:
1. P is an instantiated algorithm that behaves as if it [x]. (Where [x] = “understands and speaks Chinese”.)
2. If we examine P, we can easily see that its inner workings cannot possibly explain how it could [x].
3. Therefore, the fact that humans can [x] cannot be explainable by any algorithm.
I have some problems with your formulation. The fact that P does not understand [x] appears nowhere in it, not even in premise #1. Conclusion #3 is also wrong and should be written as "the fact that humans can [x] cannot be explainable by P". This conclusion does not need the premise that "P does not understand [x]"; it only needs premise #2. In fact, at least two conclusions can be derived from premise #2, including the conclusion that "P does not understand [x]".
I state that - using a premise #2 that does not talk about any program - both Searle's conclusions hold true, but do not apply to an algorithm which performs (simulates) semiosis.
↑ comment by Said Achmiz (SaidAchmiz) · 2018-08-28T15:34:11.965Z · LW(p) · GW(p)
The fact that P does not understand [x] is nowhere in your formulation, not in premise #1.
Yes it is. Reread more closely, please.
Conclusion #3 is wrong and should be written as “the fact that humans can [x] cannot be explainable by P”.
That is not Searle’s argument.
I don’t think anything more may productively be said in this conversation as long as (as seems to be the case) you don’t understand what Searle was arguing.
↑ comment by TruePath · 2018-09-19T02:30:36.864Z · LW(p) · GW(p)
If you want to argue against that piece of reasoning, give it a different name, because it's not the Chinese Room argument. I took multiple graduate classes with Professor Searle and, while there are a number of details one could quibble over, Said definitely gets the overall outline correct, and the argument you advanced is not his Chinese Room argument.
That doesn't mean we can't talk about your argument just don't insist it is Searle's Chinese room argument.
↑ comment by Valerio · 2018-10-28T22:26:11.371Z · LW(p) · GW(p)
In his paper, Searle brings forward a lot of arguments.
Early in his argumentation and referring to the Chinese room, Searle makes this argument (which I ask you not to mix with later arguments without care):
it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank's computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing.
Later, he writes:
the whole point of the original example was to argue that such symbol manipulation by itself couldn't be sufficient for understanding Chinese.
I am framing this argument in a way that allows it to be analyzed:
1) P (the Chinese room) is X (a program capable of passing Turing test in Chinese);
2) Searle can be any X and not understanding Chinese (as exemplified by Searle being the Chinese room and not understanding Chinese, which can be demonstrated for certain programs)
thus 3) no X is understanding Chinese
Searle is arguing that “no program understands Chinese” (I stress this in order to reply to Said). The argument "P is X, P is not B, thus no X is B" is an invalid syllogism. Nevertheless, Searle believes that in this case “P not being B” implies (or strongly points towards) “X not being B”.
Yes, Searle’s intuition is known to be problematic and can be argued against accordingly.
My point, however, is that out there in the space of X there is a program P that is quite unintuitive. I am suggesting a positive example of “P possibly understanding Chinese” which could cut the debate short. Don’t you see that giving a positive answer to the question “can a program understand?” may bring some insight into Searle’s argument too (such as developing it into a "Chinese Room test" to assess whether a given program can indeed understand)? Don't you want to look into my suggested program P (semiotic AI)?
In the beginning of my post I made it very clear:
Humans learn Chinese all the time; it is merely uncommon for them to learn Chinese by running a program.
↑ comment by TruePath · 2018-11-16T04:50:09.452Z · LW(p) · GW(p)
Searle can be any X?? WTF? That's a bit confusingly written.
The intuition Searle is pumping is that since he, as a component of the total system, doesn't understand Chinese, it seems counterintuitive to conclude that the whole system understands Chinese. When Searle says he is the system, he is pointing to the fact that he is doing all the actual interpretation of instructions, and it seems weird to think that the whole system has some extra experiences that let it understand Chinese even though he does not. When Searle uses the word "understand", he does not mean demonstrating the appropriate input/output behavior; he is presuming the system has that behavior and asking about the system's experiences.
Searle's view, coming from his philosophy of language, is that our understanding and meaning are grounded in our experiences, and what makes a person count as understanding Chinese (as opposed to merely dumbly parroting it) is that they have certain kinds of experiences while manipulating the words. When Searle asserts the room doesn't understand Chinese, he is asserting that it doesn't have the requisite experiences (because it isn't having any experiences at all) that someone would need to have in order to count as understanding Chinese.
Look, I've listened to Searle explain this himself multiple times during the 2 years of graduate seminars on philosophy of mind I took with him and have discussed this very argument with him at some length. I'm sorry but you are interpreting him incorrectly.
I know I'm not making the confusion you suggest because I've personally talked with him at some length about his argument.
comment by binary_doge · 2018-08-28T02:00:10.436Z · LW(p) · GW(p)
I need some clarification on what seems to be a hidden assumption here... Correct me if I'm wrong, but you seem to be assuming that SCA knows that the symbols she is getting are representations of something in the universe (i.e. that they are language).
Let's assume that SCA thinks she is copying the patterns that naturally dripping sap creates on the sands on the floor of a cave.
It follows that all of these statements are not inferred:
"Moreover, it is logical that when something is read, somebody wrote it."
"[...] she observes that an action of reading not corresponding to a previous action of writing indicates that there must be an agent in the world to which an “I” is opposed."
"This does not hinder SCA from learning that “P” and “p” belong to the same paradigm."
and so on.
↑ comment by Valerio · 2018-08-28T10:44:27.629Z · LW(p) · GW(p)
SCA infers that "somebody wrote that" where the term "somebody" is used more generally than in English.
SCA does not infer that another human being wrote it, but rather that a causal agent wrote it, maybe spirits of the caves.
If SCA enters two caves and observes natural patterns in cave A and the characters of "The Adventures of Pinocchio" in cave B, she may deduce that two different spirits wrote them. Although she may discover some patterns in what spirit A (natural phenomena) wrote, she won't be able to discover a grammar as complex as that in cave B. Spirit B often wrote the sequence "oor ", preceded sometimes by a capital " P", sometimes by a small " p". She therefore infers that the symbols "p" and "P" are similar (at first she may also group "d" with them, but she may correct that thanks to additional observations).
There is no hidden assumption that SCA knows she is observing a language in cave B. SCA is not a trained cryptographer, but rather an Aboriginal cryptographer. She performs statistical pattern matching only and makes the hypothesis that spirit B may have represented the concept of writing by the sequence of letters "said". She discards the alternative hypothesis that a single character corresponds to the concept of writing (although she has some doubts about ":"), and the alternative hypothesis that capitalized words are the words reported to be written. On the other hand, the direct discourse in "The Adventures of Pinocchio" supports her hypothesis about "said".
SCA keeps generating hypotheses in this way, so that she learns to decode more and more knowledge without needing to know that the symbols are language (rather, she discovers the concept of language).
↑ comment by binary_doge · 2018-08-28T14:10:06.468Z · LW(p) · GW(p)
But the fact that it is purposeful writing, for example by a spirit, is an added assumption... SCA doesn't have to think that; she could think it's randomly generated scribbles made by nature, just as she doesn't think the rings inside a tree are a way of telling a story. They are just meaningless signs. And if she does not think the signs have meaning, your statements don't follow (having scribbles doesn't mean that some other agent necessarily made them, and since the scribbles don't point to anything in reality there is no way to understand that P and p are of the same type of item). Thus, there exists a human who can be put in a Chinese Room and make the room replicate the understanding of Chinese without knowing Chinese herself.
↑ comment by Valerio · 2018-08-28T16:17:08.106Z · LW(p) · GW(p)
Uhm, an Aboriginal tends to see meaning in everything. The more regularities there are, the more meaning she will form. Semiosis is the dynamic process of interpreting these signs.
If you were put in a Chinese room with no input other than some incomprehensible scribbles, you would probably start considering that what you are doing does indeed have a meaning.
Of course, a less intelligent human in the room or a human put under pressure would not be able to understand Chinese even with the right algorithm. My point is that the right algorithm enables the right human to understand Chinese. Do you see that?
↑ comment by binary_doge · 2018-08-28T19:50:54.999Z · LW(p) · GW(p)
Then that's an unnecessary assumption about Aboriginals. Take a native Madagascan instead (arbitrary choice of ethnicity) and he might not.
As far as I know it is not true, and certainly not based on any concrete evidence, that humans must see intentional patterns in everything. Not every culture thought cloud patterns were a language for example. In such a culture, the one beholding the sky doesn't necessarily think it displays the actions of an intentful agent recording a message. The same can be true for Chinese scribbles.
If what you're saying was true, it would be a very surprising fact that there are a whole bunch of human cultures in history that never invented writing.
At any rate, if there exists a not-an-anomaly example of a human who, given sufficient time, could not learn Chinese in a Chinese Room, the entire argument as a solution to the problem doesn't hold (let's call this "the normal man argument").
If it were enough that there exists a human that *could* learn Chinese in the room, then you could have just given some example of really intuitive learners throughout history or some such.
It is enough for the original Chinese Room to show a complete system that emulates understanding Chinese while no part of it (specifically the human part) understands Chinese, and therefore you can't prove a machine is "actually thinking" and all that jazz, because it might be constructed like the aforementioned system (this is the basis for the normal man argument).
Of course, there are answers to this conundrum, but the one you posit doesn't contradict the original point.