Thought as Word Dynamics
Paul J. Jorion, Université Catholique de Lille
paul.jorion@univ-catholille.fr
The following is a manuscript written in 2000, intended as a second volume to my French book Principes des systèmes intelligents, published by Masson in Paris in 1989.
The model presented here has been built over a number of years and from several angles. It is truly multidisciplinary in its conception, as the evidence gathered to support its plausibility has five main components: (i) mathematical objects explored by the author in another context, as part of his anthropological research on the modelling of kinship networks and of so-called "primitive mentality" types of reasoning, (ii) intensive and extensive logic, (iii) linguistics, (iv) psychology, including Freudian metapsychology, and (v) the introspective part of philosophy, which amounts to twenty-five centuries of speculative cognitive science.
The ambition here is to provide a framework for speech acts that is specific enough about its statics and dynamics to be testable as an Artificial Intelligence application (i.e. one that can be written as source code). The test began many years ago when, as part of British Telecom's "Connex" Project, I designed ANELLA: an "Associative Network with Emergent Logic and Learning Abilities" (1987-1990). The project is currently being revived as SAM: Self-Aware Machines (www.pribor.io).
I. Overall principles
1. Speech acts are generated as the outcome of a dynamics operating on a network
2. This network is stored in the human brain
3. A talking subject experiences the dynamics as being emotional or "affective"
II. Statics
4. The Network comprises a subset of the words (the "content words") of a particular natural language
5. The individual unit of the Network is a word-pair
6. Each such word-pair has at any time an affect value attached to it
7. The word-pairs and their affect value result from Hebbian reinforcement
8. The Network has two principles of organisation: hereditary and endogenous
9. The hereditary principle is isomorphic to the mathematical object called a "Galois Lattice"
10. The endogenous principle is isomorphic to the mathematical object called a "P-graph"
11. The endogenous principle is primal
12. The hereditary principle is historical: it allows syllogistic reasoning and amounts to the emergence of "reason" in history
III. Dynamics
13. The skeleton of each speech act is a path of finite length in the Network
14. A speech act is the outcome of several "coatings" on a path in the Network
15. The dynamics of speech acts is a gradient descent in the phase space of the Network subjected to a dynamics
16. The utterance of a speech act modifies the affect values of the word-pairs activated in the act
17. The gradient descent re-establishes an equilibrium in the Network
18. Imbalance in the affect values attached to the Network has four possible sources
- Speech acts of an external origin, heard by the subject
- Bodily processes experienced by the speaking subject as "moods"
- Speech acts of an internal origin: thought processes as "inner speech" or hearing oneself speak (being a sub-case of 2.)
- Empirical experience
19. In the healthy subject each path has inherent logical validity; this is a consequence of the topology of the Network
20. Neurosis results from imbalance of affect values on the Network preventing normal flow (Freudian "repression")
21. Psychosis amounts to defects in the Network's structure (Lacanian "foreclosure")
IV. Implications
22. Speech generation is automatic and only involves the four sources mentioned above
23. Speech generation is deterministic
24. There is no room in speech act generation for any "supra-factor" additional to the four sources mentioned above
25. One such superfluous "supra-factor" is "intentionality" triggered by consciousness or otherwise
I. Overall principles
1. Speech acts are generated as the outcome of a dynamics operating on a Network
The general hypothesis that "speech acts are generated as the outcome of a dynamics operating on a network" is specific when it states that the data, the "words", summoned in the generation of speech acts are structured in a network. It is also informative when it distinguishes two parts to the mechanism: a "statics", being the Network itself, and a "dynamics", so far unqualified, operating on it. It is however somewhat trivial in many other respects: one may indeed wonder whether there is any logical alternative at all to the way the hypothesis is formulated here. For example, speech performance unfolds in time and is therefore of necessity a dynamic process; also, any dynamics necessarily operates on a substrate constituting its "statics". In the case of speech performance the statics automatically comprises the building bricks of speech acts, i.e. the words that get combined sequentially into speech acts.
About the nature of the statics: unless the full complexity of speech performance is assigned to its dynamics, some of its structure is bound to reflect the static organisation of the data. Admittedly, there is no compelling reason to consider that the data are structured at all: the converse hypothesis cannot even be dispelled, namely that speech performance results from an extremely complex dynamics operating on unstructured data, sentences being generated by picking individual words on demand from a repository where they are randomly stored. At the same time, this converse hypothesis would suppose a highly uneconomical method for dealing with the task of generating a sequentially organised output. This would be unexpected, as it has been observed that as soon as biological processes reach a certain level of complexity, the complexity spreads between the substrate and the dynamics operating on it (the process of "emergence", due to self-organisation).
If data ("words") are organised in a manner or other in their repository, one avenue for modelling such organisation is to represent it by the mathematical object known as a graph (a set of ordered pairs). A connected graph (we'll show that the connectedness of the graph is a condition for the rationality of the speech acts uttered) is what one refers to in non-technical terms as being a "network". In other words saying that the dynamics of speech performance operates on a network amounts to saying simply that its substrate of words is "in some way" and "in some degree" organised. Saying that this Network is connected amounts to saying that the full lexicon of the language is available whenever a clause is generated. [As will be shown below (section 21), in psychosis, only part of the lexicon is available at any one time for speech performance. Neurosis (section 20) corresponds to the less dramatic circumstances when individual words and therefore particular paths in the network are inaccessible, the whole lexicon remaining accessible, sometimes though through convoluted and cumbersome ways.]
2. This Network is stored in the human brain
It is clear that speech acts are produced from within a talking subject, as it is the mouth that utters them. This does not necessarily imply that the Network mentioned above, along with its data, is stored within the talking subject. It can however be inferred that such is indeed the case, essentially for lack of a viable alternative.
Suppose that the Network were located elsewhere than within the talking subject, meaning that the substrate of speech acts lies outside his body. The talking subject would then need some means of communicating with this outer Network: either the external source acts as a sender and the speaker as a receiver, or the talking subject can tap the source from a distance.
Quite interestingly, it is a distinctive feature of those individuals whom the majority of humans regard as mentally deranged that they postulate the existence of such an outside source for speech acts and claim that their words, or their inner speech, are being interfered with by an obtrusive sender. [We will show below (section 21) why a network whose connectedness is broken should be expected to assume that it cannot itself be the source of the speech performance it utters. With connectedness lost, the disconnected parts of the network have ceased to communicate and generate speech independently: the emergence of speech acts from one part of the network is perceived by every other part as coming from an external source.]
If there is an external source of speech performance, whether it acts as a sender or constitutes a repository accessed by the talking subject, there should be some circumstances where the communication is broken, or at least impaired, because of some physical obstacle interfering with it. Such impairments can be observed with any electromagnetic waves: only gravitational waves are supposedly immune to blockage, but their existence remains hypothetical. Nothing of the sort is observed with speech performance: individuals swimming at the very bottom of the ocean, walking on the moon, or prisoners of a lead-coated concrete bunker don't show any reduction in their capacity for speech. [Their speech performance might be impaired by the circumstances, but in this case other causes are more likely candidates than distance from a sender: lack of oxygenation of the brain, sense deprivation, etc.]
It is therefore reasonable to assume that the Network acting as the substrate for speech performance, i.e. containing words, is internal to the subject.
Once it is admitted that the Network is located within the talking subject, its likely container is unambiguously the brain. Indeed lesions to the brain, whether accidental or clinically performed, as well as other types of interference, do impair speech performance in very general or in very specific ways. There is by now an abundant literature, initiated by the likes of Broca and Wernicke, showing what consequences specific brain lesions have, in terms of aphasia or agnosia, i.e. impairments of various natures in speech performance or thinking (the works of Sacks, and of Damasio and Damasio, have popularised such accounts). Let us notice however that such observations, taken in isolation, are insufficient to invalidate the hypothesis of the externality of the Network: it could indeed be that lesions simply hinder reception from an external sender, or impair the brain's capacity to tap an outer repository. It is only once it is admitted as most plausible that the body of the talking subject holds the Network, the substrate for speech performance, that the brain emerges as its probable location.
Beyond this deductive probability, is there any further plausibility to the brain containing the type of network we have in mind? There is indeed: the brain is known to contain a network constituted of nerve cells, or neurones. In the coming pages we will constantly check whether the Network we are talking about here and the one made of nerve cells can possibly be the same.
3. A talking subject experiences the dynamics as being "affective"
A thoroughly "physical" account of the objective dynamics of speech performance will be provided later. In the meantime we indicate that as far as the talking subject is concerned, its subjective experience of the dynamics of speech performance is — from the initiation of a speech act to its conclusion — one of an emotional, or "affective" nature. The view commonly held is that emotions hinder the expression of rational thinking. It is true that beyond a certain threshold, emotion may turn into various forms of disarray and impair speech performance. In normal circumstances however, the "expression of one's feelings" — which is the spontaneous way people describe the motive behind their speech acts —results in rational discourse. The reason why is that the Network underlying speech performance is structured, channelling speech performance along branching but constrained paths. In such way that the expression of one's feelings engenders out of necessity one or more series of meaningful sentences.
People claim they speak to "express their feelings", "to relieve themselves", "to get something out of their system", and such is indeed the subjective experience of speech performance: talking subjects experience a situation ranging from minor to serious dissatisfaction (the causes of which we will investigate) and "talk their heart out" until, having reached the end of an outburst of speech, they feel relieved: once again of a "satisfied mind". Until, of course, some renewed source of minor irritation launches the dynamics all over again. We will show that from an objective point of view the dynamics is no doubt better described as the reaching of a potential well within a word-space under a minimisation dynamics, but it can also justifiably be described as an "affective dynamics", as for the talking subject the process is experienced as one of emotional relief. Also, the parameters determining the dynamics of the gradient descent within the word-space are the "affect" values associated with the words in the "word-space" that the Network constitutes.
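By way of illustration only, here is a toy sketch in Python of the descent towards a potential well; the word-pairs, numbers and update rule are invented for the example and are not part of the model's specification. Uttering a path discharges part of the affect carried by the word-pairs it activates, until the tension along the path falls below a threshold and the subject feels "relieved":

```python
# Toy illustration (invented values and update rule): a speech act as a
# descent towards a potential well, discharging affect along the way.
affect = {("rose", "thorn"): 0.9, ("rose", "fragrance"): 0.3,
          ("violet", "blue"): 0.2, ("thorn", "pain"): 0.7}

def utter(path, affect, relief=0.5):
    """Traversing a path lowers the affect value of the word-pairs it activates."""
    for pair in path:
        affect[pair] *= (1.0 - relief)
    return affect

def imbalance(affect, path):
    """Residual tension along the path about to be uttered."""
    return sum(affect[pair] for pair in path)

path = [("rose", "thorn"), ("thorn", "pain")]
while imbalance(affect, path) > 0.2:   # keep "talking" until relief is felt
    affect = utter(path, affect)

print({pair: round(value, 3) for pair, value in affect.items()})
```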
II. Statics
4. The Network comprises a subset of the words (the "content words") of a particular natural language
In Indo-European languages there are two types of words, and every speaker has a very strong intuitive feeling of this. We have no difficulty defining the meaning of, offering a definition for, words of the first type: "a rose is a flower that has many petals, often pink, a strong and very pleasant fragrance, a thorny stem", etc.; "a tire is a rubber envelope to a wheel, inflated with air", etc. With the second type, we are in real trouble: "the word 'nonetheless' is used when one wishes to suggest that while a second idea may, at first sight, look contradictory to one first expressed, it is however the case, etc.". When trying to define a word like "nonetheless" I typically cannot resolve myself to say that it "means" something; I would rather claim, as I did above, that "it is used when...", and revealingly I am forced to express this usage by quoting, if not a true synonym, then at least, as with "however", a word which is used in very similar contexts. The first type of words are often called "content words", the second "framework words" or "structure words". [Not every language deals with the distribution of content words and framework words in a similar way. Languages like Chinese and Japanese are much more sparing in their use of framework words than Indo-European languages are. Archaic Chinese for one had very few of those, and meaning emerged essentially from the bringing together of content words without further qualification (if needed, with strategically placed pauses).]
Dictionaries have an easy time with the first and a rotten time with the second, doing as was done here with "nonetheless": resorting to the cheap trick of referring to a closely related word, the meaning, the usage, of which the reader is supposedly more familiar with. The British philosopher Gilbert Ryle, interestingly, called the first type "topic-committed" and the second "topic-neutral". He wrote: "We may call English expressions 'topic-neutral' if a foreigner who understood them, but only them, could get no clue at all from an English paragraph containing them, what that paragraph was about" (Ryle 1954: 116). In the technically unambiguous language used by the medieval logicians, the first were called "categoremes" and the second, "syncategoremes". [Ernest Moody sums up the issue in the following manner: "The signs and expressions out of which propositions can be constructed were divided by the medieval logicians into two fundamentally different classes: the syncategorematic signs, which have only a logical or syntactic function within the sentence, and the categorematic signs (namely the 'terms' properly so called), which have an independent meaning and can be the subjects or predicates of categorical propositions. One may cite the definitions given by Albert of Saxony (1316-1390) of these two classes of signs, or of 'terms' in the broad sense: 'A categorematic term is one which, considered with respect to its meaning, can be the subject or the predicate [...] of a categorical proposition. For example, terms such as "man", "animal", "stone" are called categorematic because they have a specific and determinate signification. A syncategorematic term, by contrast, is one which, considered with respect to its meaning, cannot be the subject or the predicate [...] of a categorical proposition. To this kind belong terms such as "each", "no", "some", etc., which are called signs of universality or of particularity; and similarly the signs of negation such as the negative "not", the signs of composition such as the conjunction "and", disjunctions such as "or", and exclusive prepositions such as "other than", "only", and words of this sort' (Logic I). In the fourteenth century it became customary to call the categorematic terms the matter (the content) of propositions, and the syncategorematic signs (together with the order and arrangement of the constituents of the sentence), the form of propositions" (Moody 1953: 16-17).]
Intuitively speaking, we can understand this as meaning that content words are essentially concerned with telling us what is the category, the "kind", the "sort" of thing we are talking about; while the second type of words, the framework words, essentially play a syntactic role, a "mortar" type of role, which would explain why we have trouble explaining what they "mean" and feel more comfortable describing how they are being "used".
The Network we are talking about is made of content words: these are the building blocks of a network where roses connect with red and violets with blue. The other words, the framework words, are not part of this Network; they are stored in a different manner and summoned to make the content words stick together, as a mortar of a particular kind that will make these words, or these combinations of words, work together within a clause. Like what was mentioned in the attempt to give a definition for "nonetheless": that it is used when the two states of things which are brought together may seem at first sight to be contradictory. In order to ease the clash, to relieve the affective discomfort that comes when contradictory states-of-affairs are brought together, a word like "nonetheless" is pasted between the belligerents. With "nonetheless", the states-of-affairs evoked come from distant places in meaning-space, with discrepant electrical charges: bringing them together creates an imbalance that needs to be resolved. The talking subject who is connecting in his speech the states-of-affairs that lie on either side of the "nonetheless" cringes. So he stuffs between them a "contradiction insulator", a "compatibility patch", like nonetheless. And everything is once again fine. "The Duke knew that his best interest and the Princess's too was that he wouldn't try to see her again. Nonetheless, the following morning...". The "nonetheless" relieves my worry: I won't care for that Duke any more: if he's that kind of fool, well, good for him! What do I care!
"Framework-words" are part of what we will call the "coatings": the coatings that make out of the words found in a finite path along the Network a proper sentence.
5. The individual unit of the Network is a word-pair
As we said, mathematically speaking a graph is a set of ordered pairs. It can be decomposed into elementary units, pairs such as "cat" and "feline", and each word can be part of more than one such pair: "feline" may be associated again, this time with "mammal", and "cat" with "whiskers", etc. Once it is admitted that what we are talking about is a network, it becomes self-evident that its individual units are word-pairs. It is however possible to go well beyond this trivial observation.
The origin of the medieval notion of the "categoreme" is in Aristotle's short treatise on words called the "Categories". Here, the philosopher is only concerned with those words that can act as either a subject or a predicate in a sentence. "Blue" is predicated of the subject "violets" when I say that "violets are blue". "Colour" is predicated of "blue", the subject, when I say that "blue is a colour". It is clear that the words so distinguished as being able to act as subject or predicate amount to those I called earlier "content words". Why should they be called "categoremes"? Because, Aristotle argues, they can be used in ten different ways, with ten different functions, because there are ten points of view from which "stuffs" can be looked at, the "various meanings of being"; these he calls categories. Here is his explanation:
"Expressions which are in no way composite signify substance, quantity, quality, relation, place, time, position, state, action, or affection. To sketch my meaning roughly, examples of substance are 'man' or 'the horse', of quantity, such terms as 'two cubits long' or 'three cubits long', of quality, such attributes as `white', 'grammatical'. 'Double', 'half ,’greater', fall under the category of relation; 'at the market place', 'in the Lyceum', under that of place', 'yesterday', 'last year', under that of time. 'Lying', 'sitting', are terms indicating position, 'shod', 'armed', indicate state; `to lance', `to cauterise', indicate action; `to be lanced', `to be cauterised', indicate affection. No one of these terms, in and by itself, involves an affirmation; it is by the combination of such terms that positive or negative statements arise. For every assertion must, as is admitted, be either true or false, whereas expressions which are not in any way composite such as 'man', `white', 'runs', 'wins', cannot be either true or false" (Aristotle, Categories, IV).
The most important part of this passage lies in its final words: isolated terms, terms taken on their own, cannot be regarded as either true or false: "it is by the combination of such terms that positive or negative statements arise". One can even go one step further: does a term in isolation mean anything? "Of course", one is tempted to say; indeed, as I said earlier, we're at no loss when asked to define a term like "rose". We gave as an example of doing this: "a rose is a flower that has many petals, often pink, a strong and very pleasant fragrance, a thorny stem". We spontaneously assigned the rose the category of substance, of being a flower; we assigned quantity to its petals for being many; we attributed the quality of being pink to its petals, etc. In other words, we brought the rose out of its isolation by connecting it with other words in sentences of which, as Aristotle observed, it will then be possible to say whether they are true or false.
Out of the examples that Aristotle mentions, it is blatant that 'double', 'half', 'greater', 'two cubits long', 'lying', 'sitting', 'shod', 'armed', 'runs', 'wins' have no meaning unless they are said, predicated, of something else. But after a moment of reflection it becomes obvious that this applies to the other words too: 'man', 'horse', 'white'. As we've seen when looking at what is called the definition of a rose, they also need to be said of something to come alive. In a passage of one of his dialogues, The Sophist, Plato has the Stranger from Elea making an identical point: "The Stranger: A succession of nouns only is not a sentence, any more than of verbs without nouns. […] a mere succession of nouns or of verbs is no discourse. [...] I mean that words like 'walks', 'runs', 'sleeps', or any other words which denote action, however many of them you string together, do not make discourse. [...] Or, again, when you say 'lion', 'stag', 'horse', or any other words which denote agent — neither in this way of stringing words together do you attain to discourse; […] When any one says 'A man learns', should you not call this the simplest and least of sentences? [...] And he not only names, but he does something, by connecting verbs with nouns; and therefore we say that he discourses, and to this connection of words we give the name of discourse" (Plato, The Sophist). [Griswold notices that, apart from Parmenides, the anonymous stranger is the single figure in all the dialogues who speaks like a full-blown philosopher; he observes also that while Socrates is present in The Sophist he remains almost mute (Griswold 1990: 365).]
Assuming that there is in the brain a Network serving as the substrate for speech performance, what would be its element, the smallest unit, to be stored in such a Network? We hold that it would be the word-pairs just described, rather than words in isolation. Synaptic connections seem the perfect locus for such storage: the place where the building blocks of the brain's biological network, the neurones, come together. Why not the isolated word? Because, as Aristotle saw it, word-pairs are true or false and, as we will see next, something being true or false is the first condition for it having an affective value, i.e. what sets in motion the dynamics of speech performance.
6. Each such word-pair has at any time an affect value attached to it
The Stoic logicians held that every representation has an author and that no representation should be considered separately from its author's assent. [In Imbert's words: "According to Sextus [Empiricus] a true representation is one 'of which it is possible to make a true assertion in the present moment'. ... That is to say that interpretation needs to take into account not only the determination of the action as to its occurrence and its objects, but also the determination of the action as to its witness" (Imbert 1999: 113).]
All utterances have an author who commits his person in varying degrees to what is said by him, while his quality (status, competence) determines for other locutors to what extent they can question these utterances and negotiate their content as (prospective) shared knowledge. To each representation that we hold we assent in a specific manner: we don't "believe" as strongly in all we know, we're not prepared to put our reputation at stake in a similar way with all we feel like saying. The way we express our assent to the words we utter expresses the degree in which we identify with them, staking our support with our person — spanning from the non-committal report of a fact in a quotation to the expression of a genuine belief.
The truth or falseness of a word-pair, and a number of possible degrees between these polar values, is stored along with it. Wittgenstein gives the following example: "Imagine that someone is a believer and says: 'I believe in the Final Judgement', and I reply 'Well, I'm not so sure. It is possible'. You would say there's an abyss between our views. Should he say 'It's a German aeroplane flying above us', and I would say 'It's possible. I'm not too sure', you would say that our views are pretty close" (Wittgenstein 1966: 53). The reason why dissenting slightly from the opinion expressed by someone about the Final Judgement, or about the presence of a German aeroplane, reveals in one case a hostile rebuff and in the other a minor difference, resides in the affect values attached to either belief by the one who holds it. The strong identification of a speaker with his views on the Final Judgement renders any questioning of his opinion a rejection; conversely, his minor adhesion to the idea that there is a German plane above him makes any challenge of that view innocuous.
It is the association of affect values with word-pairs that led to the development of the psychotherapeutic technique of "induced association": one word is proposed as an inductor and the subject is asked to come up, "refraining as much as possible from thinking", with another word as a response. The condition that the association should be uninhibited by conscious censorship is supposed to ensure that the first word-pair retrieved is the one with the highest emotional value for the subject. In some early experiments by Jung and Riklin, one subject would associate "Father" with "drunk" and "piano" with "horrible". Jung commented: "the cement that holds together such complex [the word-pair] is the affect which these ideas hold in common" (Jung 1973 [1905]: 321).
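A minimal sketch in Python of the logic of induced association, with invented affect values (the father/drunk and piano/horrible pairs echo Jung and Riklin's example): the response to an inductor word is the partner in the word-pair carrying the highest affect value for the subject.

```python
# Sketch of "induced association" (affect values invented for illustration):
# the response to an inductor is the other half of its strongest word-pair.
affect = {("father", "drunk"): 0.9, ("father", "kind"): 0.2,
          ("piano", "horrible"): 0.8, ("piano", "music"): 0.3}

def induced_association(inductor, affect):
    candidates = {pair: value for pair, value in affect.items() if inductor in pair}
    pair, _ = max(candidates.items(), key=lambda item: item[1])
    return pair[1] if pair[0] == inductor else pair[0]

print(induced_association("father", affect))  # -> 'drunk'
print(induced_association("piano", affect))   # -> 'horrible'
```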
The first principle of association is similarity in affective value. Memories, i.e. memory traces, are linked to each other through the sensations which compose them and which they share. Such network links allow recollection: the evocation of a memory. Any memory has the potential to re-present itself (understood both as presenting itself anew and as "representation") in its whole, i.e. as a configuration of sensations which were initially perceived simultaneously. Every sensation has a double ability: that of getting imprinted as a memory, i.e. as a configuration of correlated simultaneous sensations, and that of evoking, on a stage traditionally called "imagination", old memories in which it partakes. Memory therefore allows a sensation to generate within imagination a deferred representation of its former instances. For example, the trumpeting of an elephant evokes within imagination the image of the animal as well as the fear it inspires when it charges.
It is such a network of affect values linked to the satisfaction of our basic needs, and deposited as layered memories of appropriate and inappropriate response, that allows "imagination" to unfold: to stage simulations of attempts at solution. What makes "The Lion, the Witch and the Wardrobe" an appealing title? That in the context of a child's world the threesome brings up similar emotional responses and are therefore conceptually linked (Jorion 1990a: 75-76). The classification of birds by the Kalam of New Guinea is a good example of how emotional association is the prototypical manner in which "stuffs" are associated because they elicit a similar emotional response: "…birds of mystical importance are likely to include representatives of two broad groups: those that normally maintain a considerable distance from man (many may be relatively rare) and which are selected for complex reasons, but, when encountered unexpectedly, are likely to be interpreted in highly mystical ways; and those that interact regularly and spontaneously with men and whose mystical significance derives mainly from the nature of the interactions. In the latter category are birds who call at men in the gardens and are taken as manifestations of ghosts, including some seen as bringing messages due to their chattering in a human-like manner. In the former category are birds who unpredictably and mysteriously startle men and disappear elusively, and are taken to be witches. In addition, birds of mystical significance are often the most salient and numerous species in the classificatory groups in which they occur" (Bulmer 1979: 57).
To any particular subject, a word like "apple" corresponds to the acoustic imprint "apple" and to the visual imprint of the word "apple" composed of the letters a-p-p-l-e. It is generally assumed that a subject would have an emotional response to a word like "apple". But what the technique of "induced association" shows is that words act in word-pairs. People therefore don't have any particular feelings about "apple": it all depends on the apple. The affect value attached to "apple of my eye" is likely to be different from that attached to the apple that Eve handed to Adam, while the apple in the "apples and pears" that one is not supposed to compare is likely to be pretty indifferent to most of us. These various apples sure enough are all called "apple", but apart from this identity in sound which the medieval logicians called "material", they don't share much else: emotional response to such various apples is too different for their identity as acoustic imprints to be more than a superficial likeness rather than a substantive one. ["Material" is the word applied by the medieval logicians to such likeness in sound or in writing: material as opposed to substantial that would apply to a likeness in meaning.]
The elements in the Network are therefore likely to be the word-pairs where apple meets sometimes "my eye", sometimes "Eve" and sometimes "pears". Each of these has an identity of its own, a very special affective value attached to it. It is this affect value that holds the word-pair together - like the forces holding the quarks of an elementary particle - and explains why each half acts as a handle for the other half of the word-pair. The stronger the affect value, the more inseparable the halves of the word-pair. It is the strength of the association that "induced association" exploits to draw a psychological diagnosis.
Practically, the assumption that there is, affectively speaking, more than one type of apple for a speaking subject suffices to relieve ambiguity, which is a classical difficulty in a sub-field of artificial intelligence known as "knowledge representation". How can a piece of software distinguish the fruit "kiwi" from the bird "kiwi"? It cannot, as long as computers don't attach affect values to word-pairs. [Unless someone does it on their behalf. This is what I did with ANELLA: I simulated affect values assigned to word-pairs and modified dynamically through speech performance (Jorion 1990b).]
If elephants are no more than elephants, like an apple, irrespective of what kind of apple it is, there is no way to disambiguate a sentence like "I saw an elephant flying over New York". But if elephants show up in distinct word-pairs, the ambiguity is automatically relieved: the elephants in New York's Zoo do not belong to the same word-pairs as Dumbo the flying elephant.
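A sketch of the idea in Python (not ANELLA's actual code; the word-pairs are invented): once "elephant" exists only inside distinct word-pairs, the reading whose neighbouring pairs best overlap the context wins, and the ambiguity dissolves.

```python
# Sketch (not ANELLA's actual code; toy word-pairs): disambiguation through
# word-pairs rather than through the isolated word "elephant".
word_pairs = {("elephant", "zoo"), ("elephant", "tusk"),
              ("elephant", "Dumbo"), ("Dumbo", "flying")}

def neighbours(word, pairs):
    return {a if b == word else b for (a, b) in pairs if word in (a, b)}

def disambiguate(word, context, pairs):
    """Pick the partner of `word` whose own word-pairs best overlap the context."""
    best, best_score = None, -1
    for partner in neighbours(word, pairs):
        score = len(neighbours(partner, pairs) & set(context))
        if score > best_score:
            best, best_score = partner, score
    return (word, best)

print(disambiguate("elephant", ["flying", "New York"], word_pairs))
# -> ('elephant', 'Dumbo'): the flying elephant, not the zoo's
```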
7. The word-pairs and their affect value result from Hebbian reinforcement
The most common reason, apart from identity in affect value, for the association of words into word-pairs is proximity. This may come in various guises: resemblance covering the full range of each of the senses, contiguity in space, or contiguity in time as provided by simultaneity or consecution. Two words being recurrently evoked together, Hebbian reinforcement ensures that the connection between them begins to "stick", i.e. that they are stored in conjunction in long-term memory. So kin get together; or the correlated parts of a single body, where each part soon acts as a sign for the others: the tusks come along with the trunk; the hammer with the anvil, lightning with thunder; synonyms are closely related, and so too, to no obvious purpose, at the "material" level, are homonyms: trunk (torso), trunk (suitcase) and trunk (snout), etc.
How do affect values get assigned to word-pairs? We hold that affect values are the way a talking subject experiences the strength of the association of the elements in a word-pair. Why would Jung's patient respond "Drunk" to the stimulus "Father"? Because there has been Hebbian reinforcement: because the sorry story for this person was that her father was recurrently drunk. Not every association though is "autobiographical" (see Rubin 1986), i.e. reflecting an individual's special circumstances: most are cultural, meaning that the recurrence of the same experiences for all members of the same cultural environment makes the association universally shared among them. Some exposure is of course not so much experienced as simply "found there", as an existing feature of the lexicon of the language a subject has learned, i.e. a fund shared by speakers of the same language.
The learning process leading to the storage of a word-pair is driven by punishment and reward. The process is clearly visible in language acquisition, where the child (or any subject learning a new language) tests a word recently heard (though not yet learnt) within a word-pair, on the look-out for either approval or frowning eyebrows (the latter reflecting in the listener the clash of conflicting affect values that mismatched word-pairs engender). Generally speaking, Grice's views on relevance in conversation refer to the art of generating approved-of word-pairs (Grice 1975; 1978). Similarly, Wittgenstein's "the meaning is the use" amounts to "the meaning is the set of word-pairs" in which a particular word is represented (Wittgenstein [1953] 1963: § 138-139).
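A toy sketch in Python of Hebbian reinforcement on word-pairs (the learning and decay rates are invented, not part of the model): pairs that keep being evoked together "stick", while other stored associations slowly fade.

```python
from collections import defaultdict

# Toy Hebbian reinforcement on word-pairs (invented rates): repeated
# co-occurrence strengthens a pair; every other stored pair decays slightly.
strength = defaultdict(float)

def co_occurrence(word_a, word_b, rate=0.2, decay=0.01):
    """Strengthen the evoked pair; let all previously stored pairs fade a little."""
    pair = tuple(sorted((word_a, word_b)))
    for other in strength:
        strength[other] = max(0.0, strength[other] - decay)
    strength[pair] = min(1.0, strength[pair] + rate)

for _ in range(5):                    # "father" and "drunk" keep co-occurring
    co_occurrence("father", "drunk")
co_occurrence("father", "piano")      # a single, weaker association

print(dict(strength))
# ('drunk', 'father') ends up far stronger than ('father', 'piano')
```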
8. The Network has two principles of organisation: hereditary and endogenous
a) "The Chinese way: "penetrable" vs "impenetrable" stuffs
Commentators have been divided over the centuries about where Aristotle's categories come from.
Let me recall here what they are, in Aristotle's words:
"Expressions which are in no way composite signify substance, quantity, quality, relation, place, time, position, state, action, or affection. To sketch my meaning roughly, examples of substance are 'man' or 'the horse', of quantity, such terms as 'two cubits long' or 'three cubits long', of quality, such attributes as `white', 'grammatical'. 'Double', 'half ,’greater', fall under the category of relation; 'at the market place', 'in the Lyceum', under that of place', 'yesterday', 'last year', under that of time. 'Lying', 'sitting', are terms indicating position, 'shod', 'armed', indicate state; `to lance', `to cauterise', indicate action; `to be lanced', `to be cauterised', indicate affection. No one of these terms, in and by itself, involves an affirmation; it is by the combination of such terms that positive or negative statements arise. For every assertion must, as is admitted, be either true or false, whereas expressions which are not in any way composite such as 'man', `white', 'runs', 'wins', cannot be either true or false" (Aristotle, Categories, IV).
Some hold that these exist in the physical world: according to them these ten manners of making word-pairs reflect the way the world presents itself to our senses (Imbert 1999); some have said instead that the categories reflect the way our mind operates (Sextus Empiricus); some others still hold that the categories simply reflect the grammar of the ancient Greek language and that this is where Aristotle found them (Trendelenburg quoted by Vuillemin 1967; Benveniste 1966).
Whatever the case, one of these categories has a sure footing in the physical world: that of "substance". There are two aspects to a "substance": its matter and its shape. Aristotle distinguishes "primary substances" and "secondary substances". Primary substances are particular entities such as individual men or horses ("neither asserted of a subject nor present in a subject"); secondary substances are such as the species or the genera wherein primary substances are included: iron is a primary substance, metal a secondary substance ("asserted of a subject but not present in a subject"). All other categories are "present in a subject", and some "asserted of a subject" as well. Sometimes it is also said that the species is the primary substance and the genus the secondary: Oscar is a "man", a primary substance, and an "animal", a secondary substance. In any particular location there can only be one primary substance at the same time. If Peter is sitting on the chair, Paul can sit on his knees but he can't sit at the very same location as Peter. Unlike what happens with primary substances, there is no difficulty in bringing together various secondary substances within the same physical location: when Oscar is alone in the kitchen, there is still there, simultaneously, a man, a biped, a mammal, a vertebrate, an animal and a creature.
Distinguishing things between being "penetrable" and "impenetrable" was central to archaic Chinese thought. [René Thom, the inventor of "catastrophe theory", has proposed a "semio-physics" where the concepts of "pregnancy" and "saliency" are central; these correspond broadly speaking to "penetrable" and "impenetrable": "It is therefore possible to regard a pregnancy as an invasive type of fluid that spreads within the field of the salient forms perceived, the salient form playing the role of a 'crack' in reality through which the invasive fluid of the pregnancy percolates. Such propagation takes place under two modes: 'propagation through contiguity', 'propagation through similitude', which is the way that Sir James Frazer, in The Golden Bough, classified the magical actions of primitive man. […] contiguity and similitude enlist the respective topology and geometry of our 'macroscopic' space; seen this way, there is in Pavlovian conditioning an underlying geometric base" (Thom 1988: 21).]
In Chinese thought there would be legitimate ways of combining the penetrable and the impenetrable, like "stone" and "hard", but also the impenetrable with the impenetrable, and this, unlike what happens in Western thought, would be the way that broader types, higher-level concepts, are created. For instance "ox-horse" allows one to compose the concept of "traction animals", and "water-mountain" that of "nature". One can add up two impenetrable names to make a super-ordinate category. A classical paradox of early Chinese logic, Kung-Sun-Lun's claim that "White horse is no horse", derives from the suggestion that higher-level concepts could derive similarly from combining penetrable with impenetrable (Hansen 1984; Graham 1989). [To be developed in section 11]. Aristotle's category of substance is an impenetrable: it acts as a substrate whereupon all the other categories can apply as so many coatings of "time", "place", "number", "quality", etc., without any of these getting in the way of any of the others. [That "substance" is a category unlike the other nine is something that Sir David Ross had noticed (Ross 1923: 165-66).] The primary category of substance is the substratum presupposed by all the others (Ross 1923: 23). These nine categories are penetrables and, as far as those are concerned, there is no obstacle to piling them up on top of each other. When I say that "violets are blue", nothing prevents me indeed from saying at the same time that "violets are fragrant" or that "violets are pretty". If it is true that I cannot put the impenetrables violet and rose at the very same place at the very same time, I can do so with no difficulty with penetrables such as "blueness", "prettiness" or "fragrance", as long as a violet remains the substrate, the primary substance that allows them to do so.
b) The ancient Greek way: "essential" vs "accidental" properties
There is another distinction Aristotle made, relative to the way things and "states-of-affairs" are, or at least to the way they seem to us: that between "essential" and "accidental" properties. "Essential" properties are those that characterise as such a particular type of "stuff". It is an essential property of a particular man that he is a speaking creature, or that he is aware of his own mortality. But that he is blind in one eye or that he doesn't shave his beard is an "accidental" property of his.
Concepts, universal words like "birds" or "bees", that is "labels" as I will consistently refer to them, are constituted only of essential properties, and this is what makes them conceptual. Individuals, like you or I, "exemplars" as I will consistently call them, are bundles of properties, some essential, some accidental, and this is what makes them empirical as opposed to conceptual. To a particular combination of essential properties corresponds a single "stuff" or "sort". This is why it is possible to define unambiguously a particular sort through its "essential" properties, i.e. the characterisation of its essence. When saying that man is a speaking creature who is aware of his own mortality, we're getting closer to the definition of man as a stuff distinct from every other. When saying that some men are blind in one eye or that some grow a beard we're moving away from the essence, to progress into the infinite variety of singular exemplars.
This feature, that "labels" only hold essential properties while "exemplars" combine essential and accidental properties allows (or is a consequence of) a very constructive relationship between exemplars and labels. Exemplars fall under labels, and the essential properties they possess can be seen as having been inherited from the labels they fall under. Exemplars inherit all the properties (out of necessity "essential": labels have no other) of all the labels they fall under, and these properties are essential to them as they are to these labels: they are inherent to their definition. I, as a man, inherit all the properties of all labels I am "underneath": from creature down to man, through animal, vertebrate and mammal. These are essential to these labels and therefore essential to me, their conceptual heir.
Aristotle said of predication that it can always be expressed as "to A, B belongs". "Blueness" belongs to "violets"; "colourfulness" belongs to "blue"; one "apple" belongs to Eve, another "apple" belongs to "my eye". Hence the principle for making word-pairs: to one half, the other half belongs. In common parlance, the elementary force that holds together the halves of a word-pair is expressed as an "is a" or "has a" relationship. That these are the two basic links composing the Network was intuitively understood in the 1970s, at the very beginning of the knowledge representation debate in artificial intelligence: attempts were made to create entire "semantic networks" from "is a" and "has a" relationships. As will be shown, although insufficient, this assumption was inherently sound.
Broadly speaking, the "is a" relationship is what lies below the "hereditary" principle of organisation. The "has a", is what I call the "endogenous" principle of organisation. I said earlier that the "expression of one's feelings" leads — in normal circumstances — to rational statements, because the pathways being travelled over the Network are channelled. Pathways are etched in the mind/ brain and reflect recurrent usage. Conversely, etched pathways determine the relative ease of future similar associations. Most of the sophistication of our speech performance has its source here: it is the consequence of the fact that the Network - which is the substrate for speech act generation - possesses a very structured topology reflecting both the hereditary and the endogenous principles.
9. The hereditary principle within the memory network [is isomorphic to the mathematical object called a "Galois Lattice"]
What I call "labels" were called by the medieval logicians, categoremes, denoting at the general level as "universals", Aristotle's categories like substance, time, location, quantity, etc. Categoremes are only a subset of what I called earlier "content words" or concepts. The other subset of "content words" is constituted of proper nouns or "demonstratives". These — such as "Albert" — allow speaking of 'exemplars ". Not every exemplar however has got a proper noun such as "Albert", the second manner to give an exemplar a specificity is to refer to it in a deictic manner like in "this chair", i.e. through "showing" it with the help of a word like "this". Both "labels" and "exemplars" belong together to "hereditary" fields.
Sets provide a language for understanding the in/out duality of "labels" and "exemplars". Sets need not always be regarded as completely defined. We can look at a set as a list of ordered pairs, one element of which, the label, stands for the other element: the exemplars, in the empirical world, of what the label refers to (which might be "objects" but need not be: words, i.e. set labels, may refer to other words, just as sets may refer to other sets). A set may be completely specified in terms of its label (intensive definition) without complete specification of its exemplars; conversely, a set may be completely specified in terms of its exemplars (extensive definition), the complete list of exemplars falling under the label, without full specification of its label. We may define operations on sets which are intensive, extensive, or both. [This is where the incompleteness of Aristotle's system kicks in: sets of labels account for the deductibility (rationality) of the world, the extensive collections of empiricals correspond to the necessity of enumerating non-deductible exemplars (expand).]
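A small Python sketch of the two ways of specifying a set (the example data are invented): extensively, by listing exemplars, and intensively, by a label's defining predicate, with neither side needing to be complete for the other to work.

```python
# Sketch (invented example data): extensive vs intensive specification of a set.
felines_extensive = {"Tom", "Sylvester", "Bagheera"}       # exemplars listed

def felines_intensive(properties):
    """Intensive definition: membership follows from essential properties alone."""
    return {"retractile claws", "carnivore"} <= properties

print("Tom" in felines_extensive)                                        # True
print(felines_intensive({"retractile claws", "carnivore", "whiskers"}))  # True
```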
["Hereditary" fields (consistently referred to subsequently as F), are structured in the manner of a "Galois lattice" (Freeman and White 1993). As its name suggests, a Galois lattice is a member of the family' of mathematical objects called "lattices", i.e. a non-empty set subject to a partial order. A lattice is a set of elements partially ordered by an inequality < where any pair of elements x and y have a single least clement a (the "least upper bound" or join) such that x < a and y < a and a single greatest element b (the "greatest lower bound" or meet) such that b < x and b < y. A line diagram shows this ordering as oriented lines where x < y if and only if there is an upward path from x to y. [The standard references for Galois lattices are Birkhoff (1967) and repeated in Barbut and Monjardet (1970), Wine (1982) and Duquenne (1987). Lattice computation and drawing is available from Duquenne (1992). "Dual order" lattice is an apt description for the Galois lattice if one has in mind the duality of "intents"( labels) and "extents" (exemplars) closed under intersection.] Galois lattices are of a hierarchical nature and can be accessed either in a deductive manner, working down from labels to exemplars, or in an inductive manner, working up from exemplars to labels.]
A good example, though a deceptively simple one, of a hereditary field is offered by taxonomies, where a cat is a feline, a feline is a mammal, a mammal is a vertebrate, a vertebrate is an animal and an animal is a creature. Such fields can be travelled in two directions, first from the bottom up: if Tom is a cat, then he is automatically a feline, a mammal, a vertebrate, an animal and a creature, i.e. one moves from exemplar to labels of widening generality. [Tom is, within a biological taxonomy, a "singular" or, in terms of Galois lattice theory, a "join", the ultimate bottom of the structure; "creature" is a "meet", the ultimate top of a Galois lattice.] [As Lukasiewicz was first to notice, Aristotle excludes from his theory of the syllogism both "joins", the ultimate exemplars which "singulars" constitute, and "meets", the ultimate tops, all-encompassing "universals" such as "creature". The reason, as he notes, is that Aristotle wishes to develop a theory which applies only to categoremes that can appear equally as subject and predicate. Both joins and meets are boundaries: the meet because the chain of generalisation stops at its level, the join because the chain of inherited properties ends with it. "Aristotle emphasises that a singular term is not suited to be a predicate of a true proposition, as a most universal term is not suited to be a subject of such a proposition. […] he eliminated from his system just those kinds of terms which in his opinion were not suited to be both subjects and predicates of true propositions" (Lukasiewicz 1998 [1951]: 7).]
But a hereditary field can also be travelled from the top down, from the most inclusive label down to the singular. The relevance of hereditary fields lies in the fact that the items linked through "is a" relationships possess at the same time "has a" attributes, linking them in word-pairs with external labels that are inheritable from more general to less general labels. [These external labels are possibly part themselves of other F structures.] Taxonomies of such "penetrables" are however notoriously shallow, i.e. have few levels of organisation; colours for instance are not part of a hierarchy of more general "stuffs". These attributes or properties link a label through a "has a" relationship with another label, under one of Aristotle's categories with the exception of substance. The relationship is of a "has a" type when considered from subject to predicate: "the King of France has baldness", or of a "belonging" quality when seen from predicate to subject: "baldness belongs to the King of France".
Whatever property is attached to a label trickles down to the exemplars beneath it. The labels themselves are automatically inherited down along with their properties, hence the name "hereditary" for the field. And this applies from each level of labelling down to the singular. Tom is a creature and inherits all properties belonging to creatures; he is also an animal and possesses the more specific features of animals; then, for being one, the more restricted set of features typical of mammals; finally those which only felines hold, as he is a feline. In some way, exemplars inherit down labels while labels inherit up exemplars. [The principle of the Galois lattice was perfectly understood by John Duns Scotus. Etienne Gilson sums up his view thus: "Any division is the descent from a single principle towards innumerable particular species, and it is always complemented by a reunion ascending from the particular species up to its principle" (Gilson 1922: 15).]
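A minimal Python sketch of this downward trickle of properties (the taxonomy and the properties assigned to each label are invented for the example): an exemplar collects the properties of every label it falls under by climbing the "is a" chain.

```python
# Toy hereditary field (invented taxonomy and properties): properties attached
# to a label trickle down to every exemplar beneath it.
is_a = {"Tom": "cat", "cat": "feline", "feline": "mammal",
        "mammal": "vertebrate", "vertebrate": "animal", "animal": "creature"}

has_a = {"feline": {"retractile claws"}, "mammal": {"milk"},
         "vertebrate": {"spine"}, "creature": {"existence"}}

def inherited_properties(item):
    """Collect the properties of every label reached by climbing the 'is a' chain."""
    properties = set(has_a.get(item, set()))
    while item in is_a:
        item = is_a[item]
        properties |= has_a.get(item, set())
    return properties

print(inherited_properties("Tom"))
# {'retractile claws', 'milk', 'spine', 'existence'}
```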
The move up from exemplar and the move down from label are not symmetrical however: heredity of properties doesn't move upwards. Properties such as retractile claws are lost in their generality when moving up from feline to mammal; the female producing milk is lost in the upward move from mammal to vertebrate.
The move upwards [from the join to the meet] is one of inclusion under the "is a" mode: "Tom is a cat", "a cat is a feline", etc. The relationship is transitive: if a cat is a feline and a feline is a mammal, then a cat is a mammal; and if a mammal is an animal which in turn is a creature, then a cat is both an animal and a creature. But the relationship is not symmetric; the inclusion does not operate in the opposite direction: not all vertebrates are mammals and not every mammal is a feline. Should we wish to say something about mammals in relation to felines we need to express it as "some mammals are felines", with the implication that some other mammals are precisely not felines. This is why the set of elements is only partially ordered: the ordering does not apply to any pair of elements taken randomly; there is a whole contrast set of labels equally ordered at, for instance, the level of generality where "felines" reside, i.e. "canines", "rodents", etc. This characterisation in terms of "some", as in "some mammals are felines", is what the terminology of logic calls a "quantifier", the quantifier of "particularity" in this instance. The opposite, in terms of "all felines are mammals", is called "universality"; singularity is the quantifier applying to what we call here "exemplars".
The illustration I gave of a biological taxonomy should not imply that all hereditary fields are similarly of a scientific nature. Nothing prevents a hereditary field from supposing, for instance, that all snakes are witches, while some men are snake-witches, etc. Within the Western world, in antiquity, before the advent of modern science, taxonomies were shallow, and one of the most ambitious early attempts at establishing a many-level taxonomy was Aquinas's classification of angels. There were, according to the Angelic(al) Doctor, several levels of hierarchy among angels, in descending order from God to man: starting on top with the Seraphim, then the Cherubim, the Thrones, the Dominations, the Virtues, the Principalities, the Archangels, down to the "guardian" angels of men, i.e. angels properly so called. [Aquinas supposed a hereditary if imperfect process for angels to transmit their knowledge: "... each angel transmitted to the angel below the knowledge that it received from above, but only in particularised fragments according to the capacity of intelligence of the angel beneath it" (Gilson 1927: 164).]
10. The endogenous principle is isomorphic to the mathematical object called a "P-graph"
The other type of field that constitutes the Network we call "endogenous" (consistently referred to subsequently as G). We hold that these are structured as a P-graph, an algebraic structure which I first described in 1984 (Jorion & Lally 1984; Jorion 1990b; White and Jorion 1992; White and Jorion 1996). The P-Graph is a particular type of dual of a graph: data (typically "words") are associated with the edges of the graph, and the relations between the data with its nodes; nodes typically stand for word-pairs. The P-Graph is the mathematical object underlying ANELLA, the AI project mentioned in the introduction. The P-Graph, in particular its uncanny way of growing, is compatible with the architecture of an actual biological neural network; its emergent logical and learning abilities are similar to those displayed by human beings. As we will see, a P-graph, or "G" sub-structure, connects categoremes across Galois lattices. A P-graph edge exists between two elements if there exists a homomorphism between the lattices they belong to (analogic link), e.g. eye / window; if they sound the same or are written the same ("material" connection), e.g. humidity / humility; or if they hold an emotional connection (the Jungian "complex"), e.g. father / drunk. Hereditary and endogenous fields criss-cross, and categoremes belong to both in different capacities. In one way, G structures connect F structures; seen otherwise, F structures provide local organisation to G structures.
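As a rough picture of the node/edge inversion, here is a minimal sketch (the class and names are mine, not ANELLA's): words sit on edges, and each node, standing for a relation, merely joins two word-bearing edges into a word-pair.

```python
# Classical semantic network: words on nodes, relations on edges.
semantic_net = [("boy", "meets", "girl")]        # (node, edge label, node)

# P-graph style dual: words on edges, relations on nodes.
class RelationNode:
    """A node of the dual graph: it joins an incoming word-edge to an
    outgoing word-edge, i.e. it stands for a word-pair."""
    def __init__(self, relation, word_in, word_out):
        self.relation = relation                 # e.g. "meets", "is_a", "has_a"
        self.word_in = word_in                   # word carried by the incoming edge
        self.word_out = word_out                 # word carried by the outgoing edge

    def word_pair(self):
        return (self.word_in, self.word_out)

dual_net = [RelationNode(r, a, b) for a, r, b in semantic_net]
print(dual_net[0].relation, dual_net[0].word_pair())   # meets ('boy', 'girl')
```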
None of the shortcomings of Quillian-type semantic networks used for knowledge representation are displayed by the P-Graph representation of a neural network. In contrast with a classical semantic network, concepts are attached to the edges of the graph and relations to its nodes. Thus instead of dealing with a semantic network as in figure 1,
one has a situation as in figure 2:
Such a transpose is not unique, as there is more than one way of turning the nodes of a graph into edges and its edges into nodes, i.e. of obtaining the dual of a graph.
What possible translation is there for such a dual semantic network in terms of an actual biological neural network? In the particular instance of our illustration, "boy" would be attached to a ramification of the axon or to its end-synapse, "meets" to the cell body of the connected neurone and "girl" to a ramification of its axon.
At first glance the dual semantic network does not seem to present any overwhelming advantage over the traditional semantic network scheme. It does, however, in terms of its neuro-biological plausibility, and in more than one way. Let us see why with a couple of illustrations.
On the figures depicting the examples, neurone mappability is emphasised through a slightly modified representation of a directed graph: instead of using as building blocks either nodes or edges, "graphic neurones" are used, a "graphic neurone" being composed in this instance of a node and a set of outward-branching edges. (This convention is of course precisely the one holding in the visualisation of [formal] neural networks.) To emphasise a biological-neurone interpretation of the figures, no arrow is drawn on an edge, and diverging edges from the same node depart somewhere down a common stem, suggesting the ramifications of the axon, each ending with a "synapse". Figure 3 depicts this clearly.
Figures 1 and 2 display the straightforward construction of the P-Dual of a simple semantic network containing only two concepts: "Rex" and "dog". Let us now add to the picture the additional concept of a "pet". Figures 4 a and b reveal that here again there is no special difficulty in transposing from the classical template to a P-Graph.
With more intricate cases, however, a specific transpose method becomes indispensable. This is easily provided by the auxiliary device of an adjacency matrix for the initial template graph. The principle is simple: a double-entry table is constructed in which each node of the template graph is located between the edges it connects. The matrix is used in a later step as a guide for drawing the P-Graph.
If one specifies now that in addition to being a pet, a dog is also a mammal, Figure 5 shows how this would be represented in a classical semantic network.
Here is the adjacency matrix corresponding to figure 5:
One notices that if the rule for building the adjacency matrix is indeed that of locating a node between edges, some auxiliary edges (a, d and f) are required lest "Rex", "pet" and "mammal" be absent from the matrix (the constraint this expresses, that there be no isolated node in the template graph, is in fact the condition for the "neuro-mappability" of the graph).
One is now in a position to construct the P-Graph by assigning to the nodes the names of the former edges, and to the new edges the labels of the former nodes (the number of nodes in this particular type of dual is the same as the number of edges in the template). One proceeds in the following manner: having posited the nodes a, b, c, d, e and f, the edges existing between them are drawn as instructed by the adjacency matrix. For instance, there is now an edge "dog" between b and c and another edge "dog" between b and e, etc. Figure 6 shows the derived graph.
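The construction just described can be mimicked in a few lines; the sketch below is my own illustration under simplifying assumptions (hypothetical edge names a to f, auxiliary edges attached wherever a node would otherwise sit at a loose end), not a transcription of the figures.

```python
# Template semantic network: directed edges (name, source node, target node).
# The auxiliary edges a, d and f keep "Rex", "pet" and "mammal" from being
# absent from the adjacency matrix.
template_edges = [
    ("a", "start",  "Rex"),      # auxiliary edge into "Rex"
    ("b", "Rex",    "dog"),      # Rex is_a dog
    ("c", "dog",    "pet"),      # dog is_a pet
    ("d", "pet",    "end1"),     # auxiliary edge out of "pet"
    ("e", "dog",    "mammal"),   # dog is_a mammal
    ("f", "mammal", "end2"),     # auxiliary edge out of "mammal"
]

def p_dual(edges):
    """Former edges become nodes; a former node X becomes an edge labelled X
    drawn between every edge entering X and every edge leaving X."""
    nodes = {n for _, s, t in edges for n in (s, t)}
    dual = []
    for X in sorted(nodes):
        entering = [name for name, s, t in edges if t == X]
        leaving  = [name for name, s, t in edges if s == X]
        for a in entering:
            for b in leaving:
                dual.append((a, b, X))
    return dual

for a, b, label in p_dual(template_edges):
    print(f"{a} --{label}--> {b}")
# among others: b --dog--> c and b --dog--> e, the word "dog" now labels edges
```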
To examine what has happened to the P-Graph with the addition of "mammal", we can compare figure 4 b) with figure 6. A new neurone has shown up to represent "mammal", and "dog" has branched out: a ramification has emerged as a shoot towards "mammal".
If one introduces now a second dog, "Lassie", in the picture, Figure 7 shows first the classical semantic network representation.
And here is the adjacency matrix:
Let us build the P-Graph accordingly, i.e. as shown in figure 8.
A new "Lassie" neurone has appeared, and from it has sprung a new "dog" neurone which has itself shot two ramifications towards the connections held by the original "dog" neurone. In such a way that there are now altogether four "dog" synapses belonging to two distinct "dog" neurones.
Now for a final illustration. Let us drop Lassie and go back to how things stood at an earlier stage when we only had as elements "Rex", "dog", "pet" and "mammal", and let us add "master", whereby we are introducing a new relation of a "has_a" type. A pet has a master, but a master also has a pet. Hence the classical semantic network representation as in figure 9.
And the adjacency matrix that ensues:
The locations of some elements have now become trickier: notice, for instance, "dog" between b and g and between h and g, etc. Figure 10 b) shows the resulting configuration.
Compare with figure 10 a) (corresponding to figure 6) to see what the irruption of "master" has meant in terms of the P-Dual. Firstly, a new "master" neurone has shown up. Secondly, the original "dog" neurone has shot a third ramification towards this additional neurone. Thirdly, an entirely new "dog" neurone has appeared, duplicating the first one — but not perfectly: only as far as synapses are concerned. Fourthly, the new "dog" neurone has established an odd type of symmetrical connection with the "master" neurone; a cycle has appeared in the network between a "dog" neurone and a "master" neurone: one of the synapses of "dog" connects with the "master" cell body while one of the synapses of "master" connects with the "dog" cell body.
One could pursue with illustrations of this kind.
Until we added "master" with its reciprocal relations "master has dog" and "dog has master', neurone bodies were only liable to a single interpretation: the classical "is_a" relationship of semantic networks. Should it be the case that there is only one interpretation for the neurone body, there would be no necessity whatever for attaching any labels to the nodes of a P-Graph: each node would be read out as "is_a" with no ambiguity ensuing. Things changed when we introduced "master" in the graph: from then On, a node; had to be interpreted as meaning either "is_a" or "has_a", necessitating therefore appropriate labelling of nodes. What happened with the "has_a" relation was the intervention of a cycle between the related concepts - which does not exist with the "is_a" relationship. in such a way that labelling the nodes could easily he replaced by a simple decoding of the local configuration. One could issue a rule of the type "should there be an immediate cycle between two neurones, read the node as meaning 'has_a' else read it as 'is_a'".
As has become clear by now, the P-graph model has specific strengths compared to its earlier competitors:
1. it is consistent with the currently known properties of the anatomy and physiology of the nervous system.
2. also, because the P-graph is the dual of a classical semantic network, a word is automatically distributed between a number of instances of itself.
3. these instances are clustered according to individual semantic use.
4. as announced in section 6, ambiguity, the scourge of knowledge representation, is automatically ruled out: e.g. kiwi the fruit and kiwi the animal being associated through only one relationship, the "material" (meaningless) one of homophony, their confusion does not arise: they reside in distant parts of the P-graph.
5. the growth process of the graph explains why early word traces are retrieved faster than those acquired later: the number of their instances is of necessity large, as they have acted repeatedly as "anchors" for the inscription of new words in the process of language acquisition (this allows us to do without the extraneous hypothesis that Michael Page mentioned in a recent article in Behavioral and Brain Sciences: "... a node of high competitive capacity in one time period tends to have high competitive capacity in the next" - 2000: 4.4 Age-of-Acquisition Effects).
Resemblance is no longer a question only of distances measured over a neural network; it covers as well topological similarities. For instance, synonyms do not need to be stored physically close to each other in the brain (indeed it is unlikely they would be, as synonyms are typically acquired at different times in life rather than simultaneously) as long as they are part of isomorphic configurations of elementary units of meaning. Topological similarity may suffice for resonances to develop between homomorphic sub-networks, allowing synonyms to vibrate in unison.
From what has just been said, the obvious interpretation of the learning process in a Network is that each time a new signifier is added, a number of edges (determined by the P-Graph algorithm) are created representing a number of distinct neurones. As such however, the growth process of a P-Graph cannot reflect the actual learning process taking place in the cerebral cortex.
If the cerebral cortex of a new-born is such that each neurone is connected to a large number of other neurones - i.e. is mappable on a quasi-complete graph - then the structuring of the network for memory storage purposes implies for each neurone a dramatic destruction of most of its existing connections, the remaining ones becoming "informed" precisely because of their drastic reduction in number.
A Network is therefore constituted of two parts: a "virgin", unemployed part composed of quasi-completely connected neurones, and another part, active for memory storage, composed of sparsely connected neurones. Learning a new word would then mean including a number of such "virgin" quasi-complete neurones within the active Network, attaching the new signifier's label to their axonal ramifications, and making them significant by having most of their connections removed, with the exception of those which have become meaningful through their labelling.
An existing neurone would therefore intervene actively in memory storage as soon as it has become structured, i.e. as soon as most of its connections have been severed, the few remaining ones encoding from then on a specific piece of information. Viewed in this way, learning would not consist of the addition of new neurones but of the colonisation of existing but "virgin" neurones belonging to an unemployed part of the cerebral cortex. [It is possible in this case to suppose that such structuring is not strictly deterministic but results from some type of Darwinian competition such as that described by Edelman and co-workers (Edelman 1981; Finkel, Reeke & Edelman 1989). If Edelman's analysis is correct, it may even be possible to imagine that the newly colonised neurones are actually diverted from some other function they were performing until then.]
The only major constraint on such "pruning" for learning purposes would be that the network remain a connected graph (that there remain at least one path connecting each node to every other node), that is, that there be no disjoint sub-graphs. [See section 21 where it is argued that here lies the origin of psychosis.] The smaller the number of edges, the more significant is the information contained in the network, as the reduced topology becomes concomitantly more significant. The issue is parallel to that of percolation but, so to speak, in a reversed manner: pruning should go as far as it can but not beyond the percolation threshold. [The ultimate means of diminishing the number of edges in the graph is to allow it to degenerate into a tree. This would not mean that a single completely ordered hierarchy obtains, as hierarchies defined by distinct principles can intertwine. Such a principle should not however be sought, as the G relationship has a valuable role to play in the net. The definition of an "even number" is for instance decomposed in the following manner by a module of ANELLA: "even is_a number has_a divisor is_a two". It is clear that the "has_a" is here highly significant and could not possibly be replaced by an "is_a" relation. The "is_a" relationship introduces however a very effective structuring principle in a net, as is revealed by contrast in "primitive mentality" where the "has_a" relationship is predominant, if not the only existing one (see on this Jorion 1989).]
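The connectivity constraint on pruning can be stated as a small procedure; the following is a minimal sketch on an arbitrary toy graph (my own illustration, not a model of cortical wiring): edges are severed at random, but any removal that would disconnect the graph, i.e. cross the percolation threshold, is refused.

```python
import random

def neighbours(x, edges):
    return {b for a, b in edges if a == x} | {a for a, b in edges if b == x}

def is_connected(nodes, edges):
    """Breadth-first check that every node is reachable from an arbitrary start."""
    start = next(iter(nodes))
    seen, frontier = {start}, [start]
    while frontier:
        x = frontier.pop()
        for y in neighbours(x, edges):
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return seen == set(nodes)

def prune(nodes, edges, attempts=100):
    """Sever edges at random, but never past the point of disconnection."""
    edges = set(edges)
    for _ in range(attempts):
        candidate = random.choice(sorted(edges))
        if is_connected(nodes, edges - {candidate}):
            edges.discard(candidate)       # this connection was redundant
    return edges

nodes = {"A", "B", "C", "D"}
edges = {("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("B", "D"), ("C", "D")}
print(prune(nodes, edges))                 # a connected remnant, typically a spanning tree
```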
The simple rule for neurone colonisation embeds topological information into a P-graph while ensuring redundancy in the representation of any individual word, confirming Page's insight that "localist models do not preclude redundancy" (Page 2000: "They do not degrade gracefully").
Aristotle is the father of what he named Analytics, the ancestor of what we nowadays call "logic". What analytics proposed with the theory of the syllogism are the principles of accurate reasoning or, in Aristotle's own words, of "not contradicting oneself". [Aristotle: "The intention of the present treatise is to find a method through which we will be able to reason from generally admitted opinions relating to every problem submitted to us and which will allow us to eschew, when developing an argument, saying anything which would be (self-) contradictory" (Topica, 100 a 18).] Let us take a categoreme, say "cat", and let us consider it in two word-pairs: "cat-whisker" and "cat-feline". If we wish to produce sentences with these pairs, we can say "a cat has whiskers" or "whiskers belong to a cat", and "a cat is a feline" or "some felines are cats". Let us define the distance between the two halves of a word-pair, i.e. between "cat" and "whisker" and between "cat" and "feline", as being the unit, "1". What Aristotle's theory of the syllogism proposes are the rules for designing a sentence that makes sense between "feline" and "whiskers" through "cat", here called the "middle term". In other words, a syllogism provides the rules for valid sentence-making between concepts at a distance of "2" from each other in the Network.
The way this works is well known: "whiskers belong to cats", "cats are felines", thus (reversal of focus) "some felines are cats", hence "whiskers belong to some felines". Ernst Mach (the physicist and philosopher of science whose name survives in the Mach number, the ratio of a speed to the speed of sound) regarded the task of science as operating nothing more but nothing less than "mental economies" (Mach 1960 [1883]: 577-582). The syllogism is the basic tool for mental economy: with the help of two word-pairs in which "cat" is involved, we have established a bridge between the two other halves of these word-pairs, "whiskers" and "felines". We have shortcut "cat" in the conclusion of a syllogism bringing together "whiskers" and "felines" and have thus made a "mental economy". John Stuart Mill held that the syllogism is trivial: it does not offer more information in the conclusion than there was beforehand; the leap, according to him, is in one of the premises: the intellectual boldness lies in holding the cat to be a feline, the rest is nothing more than "having brought one's notes together" (in Blanché 1970: 251-252). This is true: the contribution of the syllogism does not lie in additional information content; it lies in what has just been shown: it offers valid ways of connecting concepts at a distance of "2" in the Network. And because hereditary fields, the F structures, are transitive through the inclusion of concepts, they potentially offer ways of connecting concepts at any distance from each other: if "some felines have whiskers" through cats and "felines are mammals" then "some mammals have whiskers", and also "some animals have whiskers", etc.
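In network terms the mental economy can be mimicked as a bridge over the middle term; the sketch below is my own toy encoding of the cat example (the relation names and the carrying-over of the first relation are simplifying assumptions): two word-pairs that share a word yield a conclusion linking their outer terms.

```python
# Word-pairs at distance 1 in the Network, with the relation linking them.
pairs = [
    ("whiskers", "belongs_to", "cat"),
    ("cat",      "is_a",       "feline"),
]

def bridge(pairs):
    """Connect concepts at distance 2 through their shared middle term."""
    conclusions = []
    for x, r1, m1 in pairs:
        for m2, r2, y in pairs:
            if m1 == m2 and (x, r1, m1) != (m2, r2, y):
                conclusions.append((x, r1, y, f"via middle term '{m1}'"))
    return conclusions

print(bridge(pairs))
# [('whiskers', 'belongs_to', 'feline', "via middle term 'cat'")]
# i.e. "whiskers belong to some felines": the middle term has been shortcut
```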
What Aristotle accomplished with his analytics represents a particular solution to formulating a logic, not necessarily a universally valid one, but one sufficiently resilient that it successfully repels the suspicion that thought is a mostly fallible or culturally variable process. In our terminology, his solution consists of attaching to the top of every F cone the essential attribute of substantiality. Nonetheless, at the symbolic level, Aristotle's "reduction" to a materiality that was at least in principle hierarchical provided the foundation for a kind of geometric principle of ordering. Aristotle's use of line diagrams in geometric form as a representation of syllogisms (Ross 1923: 33) antedates the equivalent use of Venn diagrams. The ability to do so, to reduce all syllogisms to line diagrams with a single kind of line, or to Venn diagrams with a single kind of set representation of extensional definitions, is foundational to a hierarchical view of logic. In many ways Aristotle was formalising what we call the G operator by embedding abstract (penetrable) properties within a substantial (impenetrable) context. It remained for the F operator to be developed as a hierarchical operator.
The difficulties with Aristotle's formulation are that (1) it forces all content words to be treated as if they were within F cones, assigning implicit properties through the inheritance of essential properties, while at the same time (2) it assumes that abstract or concrete definitions cannot exist and be handled on their own (e.g., within F cones), that is, that the potential connection between implicit labels and exemplars is everywhere necessary and defined, i.e. (3) that all propositions can be translated without informational loss into statements of the "has_a" form "A has B", so that (4) there is no room for our distinction between the F operator, implying the heredity of implicit labels (which assumes the possibility of both actual labels [substantives] and implicit labels but does not require their simultaneous presence), and the G operator, which may be thought of as of the form "A has B" but does not bestow hereditary properties when compounded in the form "A has B has C". [Actually, Aristotle states that all propositions can be expressed as "a belongs to b": to state attribution Aristotle does not say "B is A", but "to A, B belongs" (see Hamelin [1905] 1985: 158). Only the relation between primary substance and secondary substance is properly of an (inclusive) "a is b" nature.]
Here is another example: "violets are blue", "blue is a colour", hence "violets are a colour". This does not work; here is where the syllogism breaks down. This is the point where the fecundity of F structures at providing rules for the valid generation of sentences connecting concepts at a distance of "2" in the Network gives out. By now we understand why, from the fact that "violets are blue" and that "blue is a colour", we cannot draw the conclusion that "violets are a colour". Impenetrable things (not only "substances") are the elements that hereditary fields, our F structures, are made of, but not so the penetrable stuff floating around as the potential coatings of the impenetrable. These operate following a different principle, that of endogenous fields, which we call G structures. The remaining nine categories of Aristotle are what we would call G operators, since they are abstract properties. For us, these G operators would need to be programmed within each F cone for them to become operational as Aristotelian logic.
Aristotle would treat "violets are blue" as relating an explicit label (substantive), "violets", by inclusion not to a purely abstract category (blueness) but to an implicit label, "blue violets". It does not follow from "blue is a colour" that "violets are a colour", but again, by implicit labels (blue things are coloured things), that "blue violets are coloured violets". In our approach, however, "blue is a colour" can be a purely abstract statement, an F operator. Thus we need to distinguish "violets are blue" as a G operator mapping an explicit label (substantive), "violets", to one of its essential properties, "blueness".
There is therefore a need to specify which concatenations are forbidden in order not to generate false syllogisms with conclusions such as "violets are a colour". At the same time, underlying our intolerance of such conclusions lies our commitment to the primacy of substantiality. Another culture, as we will see in section 11, may decide that a token can be taken to indicate a generic property.
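One way to state such a restriction is as a small composition table; the table below is my own illustrative assumption about how the constraint might be coded, not part of the model: chains of "is_a" compose transitively, an attribution followed by an inclusion remains an attribution, and no chain starting from a "has_a" reading of "violets are blue" ever yields a new inclusion, which blocks "violets are a colour".

```python
# Hypothetical composition table for chaining word-pairs through a middle
# term: keys are (first relation, second relation), values the relation
# allowed in the conclusion, None when no conclusion follows.
COMPOSE = {
    ("is_a",  "is_a"):  "is_a",    # cat is_a feline, feline is_a mammal
    ("is_a",  "has_a"): "has_a",   # cats are felines, felines have whiskers
    ("has_a", "is_a"):  "has_a",   # violets have blueness, blue is_a colour
    ("has_a", "has_a"): None,      # no valid conclusion in this sketch
}

def conclude(p, q):
    (x, r1, m1), (m2, r2, y) = p, q
    if m1 != m2:
        return None                           # no shared middle term
    r = COMPOSE.get((r1, r2))
    return (x, r, y) if r else None

print(conclude(("cat", "is_a", "feline"), ("feline", "is_a", "mammal")))
# ('cat', 'is_a', 'mammal')
print(conclude(("violets", "has_a", "blue"), ("blue", "is_a", "colour")))
# ('violets', 'has_a', 'colour'), never the false "violets are a colour"
```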
There are two types of operators for the F structure of hereditary fields. One type acts from the "label" end of hereditary fields; the fields themselves act as transformers of signals that exit via another set of operators at the "exemplar" side. The other set of operators acts in the other direction, from the "exemplar" end, generating transformed signal patterns within the fields which exit as operators at the "label" end. Inside the hereditary fields there exists a single operator along with its inverse (i.e. the two possible directions for signals).
An altogether different type of operator (and its inverse) is present in the G structure of endogenous fields, and it networks the hereditary fields together. Fields of diverse operators of these sorts, plus a myriad of specialised machinery, provide a dynamic model of the brain.
Nodes of intersection in F cones play the role of generating implicit labels. G operators can attach to any such nodes, providing our model with an in-built capability to process and act on implicit labels. When the action includes speech performance, a mapping into language obtains. From an engineering point of view it would seem that F cones are capable of treating information from the senses coded as exemplars and of identifying types quite efficiently (e.g. predators, friends, foods, types of entities, dangers, resources, opportunities, etc.), but such inferences need to be developed by experience, culturally and linguistically, while Gs are necessarily ubiquitous in moving information around between locations and specialised (including learned) functions in the brain. The diffuse quality of the G network would be the temporally and logically prior structure in which information travels and is bundled and concatenated in ways needed for overall responsiveness and for abilities to learn, develop heuristics, and create paths into and out of F cones to convey what may become crucial and precise discriminations once F cones come to be experientially programmed. Children do not enter the world equipped with all the pervasive notions of inheritance and inclusiveness that they may later show themselves capable of using. They will soon come to know that something is a fruit or a flower; however, in their eyes notions are initially more likely to be apprehended through the more diffuse G fields, allowing a variety of ways of learning, experimenting, applying combinatoric search procedures, etc. The efficiencies that potentially reside in the F principle of easily computed hereditary attributes contextualising content words in F cones need to be self-programmed: learned or culturally taught.
Our model now comprises two kinds of operators, each implying what would be a different brain implementation, but it is not yet capable of generating speech performance since they are nothing but two criss-crossing structures within an isolated Network. Additional types of connectives are required for a dynamic system giving expression to the inner states of the organism and to signals transduced from the environment. We can imagine hundreds of thousands of hereditary fields corresponding to the observed cylinder-like structures in the brain. Each field has a limited number of meet-irreducible elements activated by signals input as external stimuli. Particular combinations of such signals are capable of activating any single node in the field, but only via the hereditary properties attached to it. This is precisely what the lattices of hereditary properties allow. Thus the mode of activation of "conceptual" thought, in the abstract hereditary mode (where labels need not refer to referents), already presupposes a closure operator for expanded labels within the field. Any signal triggering the hereditary fields as input from the "labelling" end is capable of generating abstract thought.
Figure 11
If the hereditary structures are truly wired as lattices, signals corresponding to abstract labels coursing through such a structure via hereditary channels will also produce a unique output in terms of the join-irreducible elements representing canonical exemplars of different concepts. The brain is thus capable of converting abstract concepts into concrete exemplars, providing for labels an interpretation in terms of exemplars. However, output signals can also be other abstract labels; in other words, some outputs may trigger signals that are re-routed back to the "conceptual input" end of the hereditary fields.
Figure 12
Signals, however, may also enter these structures from the other, "concrete" end of the hereditary channels, standing for exemplars, sets of exemplars, or types of exemplars. If a type of exemplar is so identified, the flow of incoming signals through hereditary channels activates each of the individual nodes or subsets afferent to the given type. Rodents, for example, will thus automatically distinguish in this way those "birds of prey" that are a danger to them. Recognition of a particular hawk "known" to the rodent, or of a connected subtype, is a sufficient signal for instantiation. The "output" signal might look very much like a high-order concept or abstraction since it comes out at the top (conceptual) end of the hereditary field and delivers its message to the organism's response system.
Let us use colour as an illustration. Localisation of meaning might occur as a set of external signals (e.g., from sense organs such as the retina or associatively from the limbic system) that converge on a limited set of one or more F cones, and let us examine how output signals might be generated. We take our organisational cues from what is known on the one hand about colour perception and on the other hand about the use of colour terms in human languages. Hue, for example, is an ordinally structured domain of signals dominated by the black/white opposition, red being the next most salient element. A partial order creates a fairly narrow lattice running through the saliency of the other basic colour distinctions:
1=> Black/white
|
2=> Red
/ \
3=> Blue Green
\ /
Yellow
etc.
At each level where there are meet-irreducible colours there must be inputs (1, 2, 3) from the external visual apparatus if these nodes are to be activated by visual stimuli, but they can also be activated by other internal signals (e.g., emotions; see D'Andrade 197.), and they of course send G-type signals to a variety of other brain locations. One can imagine this rather simple lattice quickly becoming more complex with the addition of a multitude of ranked or ordinal levels of discrimination regarding brightness, intersecting with a series of levels of discrimination of saturation, although quantitative phenomena are easily handled in bulk by signal magnitudes on different channels, so a specialised brain structure might come into play that is an analogue to an ordinally structured F cone. [This might be a mechanism similar to what I have in mind with the complexification signalled by a lesion in the Broca area. An article in Behavioral and Brain Sciences, "The neurology of syntax: Language use without Broca's area" by Yosef Grodzinsky, offers numerous examples of such enhanced complexity at work. Briefly, Grodzinsky's paper establishes that speech impairments due to lesions in the Broca area do not affect syntactic cross-linguistic ability as such but only the linguistic complexity that a typical speaker can achieve, this being reflected, depending on the language, in different domains of what would constitute a hypothetical "universal syntax". Thus typically in English, Broca aphasia corresponds to a lost capacity for introducing a second semantic focus in the clause, most often introduced by a transition with "whom" or "whose", while in Dutch it is the capacity to inflect the verb, typically located at the very end of the sentence, that is impaired.]
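Read as a partial order, the little lattice above can be walked mechanically; the sketch below is my own toy encoding (the covering relation is taken from the diagram, everything else is an assumption): activating a node from an external input makes everything below it reachable.

```python
# The saliency lattice sketched above, as a covering relation: each colour
# lists the colours immediately below it in the ordering.
BELOW = {
    "black/white": ["red"],
    "red":         ["blue", "green"],
    "blue":        ["yellow"],
    "green":       ["yellow"],
    "yellow":      [],
}

def downward_closure(node):
    """Every colour node reachable downwards from an activated node."""
    reached, frontier = {node}, [node]
    while frontier:
        x = frontier.pop()
        for y in BELOW.get(x, []):
            if y not in reached:
                reached.add(y)
                frontier.append(y)
    return reached

print(sorted(downward_closure("red")))   # ['blue', 'green', 'red', 'yellow']
```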
The prototype brain structures for the hereditary operator are the cylindrical cones of the neo-cortex, where sensory inputs/outputs flow in opposing directions through the cones, and what we might call endogenous inputs/outputs flow in transposed directions. Thus, sensory inputs may flow "up" and stimulate endogenous "conceptual" outputs "up and out" and/or reverberate as sensory outputs "down" the cylinders. Conversely, endogenous inputs may flow "down" and stimulate sensory outputs "down and out" or reverberate as endogenous outputs back "up" the cylinders. The ideal model for exact precision in such neural pathways is, by definition, the Galois lattice, so defined because each combination of input signals has a unique join output element. The Galois lattice as neural network has the capability of a precise dual representation of, and interaction between, conceptual and experiential inputs and outputs.
This is not to say that cylindrical neurological pathways are necessarily organised as lattices, but that as such pathways are entrained into an "exactitude" of unique neurological outputs as a result of particular input combinations, they necessarily approximate such lattices. Formal reasoning abilities — to the extent that they are capable of exactitude — must necessarily be entrained in neural networks that have the capability of exhibiting a lattice-like functioning.
For a dual representation lattice to function properly in terms of its links and its nodes in a neural network, its nodes (neurones) must be capable of "storing" signal potentials that correspond to either conceptual output or sensory output. The only difference between them is that "sensory" output flows out in the opposite direction to "conceptual" output. The synapses simply do the work of transmitting directional signals.
The prototype of the endogenous operator is not a specific brain structure but the diffuse network of neural pathways which materialise the simple connecting mode observed in other parts of the brain. Unlike the two-mode inputs and outputs of the cortical cylinders, these neural networks are analogous to one-mode networks: connections between elements of a single type, such as the connections linking the myriad of brain cylinders.
Is it possible for endogenous operators corresponding to our primitive notions of "associative", "intuitive", or "emotional" thought, to achieve the degree of precision attained by the lattice-like representation? In the general case, if a pattern of endogenous links in a network is fed to a lattice-interpreted part, the results are unsatisfactory: the lattice output becomes pathological because the unity of elements as senders and receivers of relations is destroyed. The pathology is that of a description of a network pattern based exclusively on snapshots of network actors in terms of their attributes as senders and receivers, not their actual relations.
The precision of endogenous networks consists of a relational description of patterns of links described as reverberating blocks of the network. Blocks are not sets of nodes: they are sets of pair-wise relations. In any network or graph the relations between elements can be partitioned into mutually exclusive blocks such that all pairs or circuits are within the same block. Circuits are circular arrangements of links without regard for the direction of linkage. A linked pair is a trivial circuit; proper circuits contain more than two nodes and at least as many links as nodes to complete the circuit. Theorems in graph theory establish that, in every block with proper circuits, any two nodes are connected by more than one independent path. Blocks are thus the largest units of reverberation. Reverberations include mutually reinforcing or mutually dampening linkage or flow patterns. Blocks may be linked by sharing at most one node in common. Block connections cannot reverberate and are thus vulnerable to cut-point disconnection. Blocks themselves contain no cut-points.
Blocks have two properties of fundamental interest for a theory of brain function. First, as noted above, there is internal reverberation within blocks, a potential for network-based (rather than localised) reinforcement or de-inforcement (extinction). Second, blocks are interconnected in the form of a tree: their interconnections by definition exclude circuits, which lie inside blocks. Every tree may be said to form a hierarchy or to have a centre. The centre of a tree may be said to be those nodes with the quickest or easiest reachability to all others. Hence the relational block-structure of endogenous networks is invariably capable of strategic control, which can operate most efficiently from centre to periphery. We will see in sections 20 and 21 how this feature may explain various pathologies observed. The necessarily hierarchical structure of blocks in endogenous networks displays a potential for exact self-representation that differs from that of lattice-like two-mode networks or neural structures.
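The vulnerability to cut-point disconnection is easy to exhibit; the sketch below is my own code on a toy graph (two triangular blocks sharing one node), not a model of any particular brain network: it finds the cut-points by brute force, and the two triangles on either side of the single cut-point are the blocks.

```python
def neighbours(x, edges):
    return {b for a, b in edges if a == x} | {a for a, b in edges if b == x}

def connected(nodes, edges):
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, frontier = {start}, [start]
    while frontier:
        x = frontier.pop()
        for y in neighbours(x, edges):
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return seen == nodes

def cut_points(nodes, edges):
    """A node is a cut-point if removing it disconnects what remains."""
    cuts = set()
    for x in nodes:
        rest = nodes - {x}
        kept = {(a, b) for a, b in edges if x not in (a, b)}
        if not connected(rest, kept):
            cuts.add(x)
    return cuts

# Two triangular blocks (reverberating circuits) joined at "C".
nodes = {"A", "B", "C", "D", "E"}
edges = {("A", "B"), ("B", "C"), ("C", "A"), ("C", "D"), ("D", "E"), ("E", "C")}
print(cut_points(nodes, edges))   # {'C'}: the graph falls apart without it
```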
The F and G functions, or hereditary and endogenous operators, are simply strategies open to the human brain for self-organisation. There is a natural analogy between the F operators and the cylindrical structures, and one between the G operators and diffuse network neural organisation. G operators however invade the cylindrical structures, while F operators can be imposed on diffuse network signals (there is a historical process of such progress). We would expect to find variations in how such strategies are implemented, and a variety of normal as well as pathological organisations. Our expectation is that what might be regarded as a wide range of "normal" speech behaviour expresses varying degrees and quantitative differentiation in the extent to which F and G operators are present and linked in any particular Network. Some modes of operation which are judged to be pathologies (without applying that term pejoratively but as a culturally defined, normal kind of diagnostic) might result from extreme differences in the application of F-type reasoning to diffuse network inputs, or in the application of G-type operators ("concrete thinking") to conical network inputs that are otherwise normally susceptible to at least shallow hereditary-logic cognition.
In principle, however, an optimal brain function would utilise these two functions differently:
1) the Two-Mode Duality of the Hereditary Network as Lattices
2) the Node-Link Duality of the Endogenous Network as P-Graphs
It is entirely possible that the normal "endogenous neural network" utilises the P-graph strategy of arc-specific labelling, while the normal "hereditary neural network" utilises the lattice strategy of node-specific labelling. We leave to the neurophysiologists the question of whether and how different kinds of neuronal/synaptic cells and functions might be implicated either as hard-wiring or as learned-programming.
On the question of how F and G networks might be connected:
1) G operators, typically bearing labels on their synapses (network edges or relations), could interact with F operators, which typically bear dual labels neuronally (network nodes), without any particular problem of information processing being created. The coupling, of course, is through synapses.
2) each type of network, F and G, of course, will bear its own distinctive costs and problems.
11. The endogenous principle is primal
We noted in passing, when mentioning Aquinas's classification of angels, that deep hierarchies are a recent phenomenon. We are familiar nowadays with the multi-layered hierarchies that the natural sciences have generated over the past four centuries, in the realms of biology, chemistry and physics. The process we have mentioned, of the trickling down of essential properties from label to exemplar, holds no secrets for us anymore, since the likes of Linnaeus captured the natural world in a set of Galois Lattices. But what about the thinking process in the period that preceded?
Emotion and relevance are the default chunking strategies. They etch the paths of the G networks and fill in relatively shallow properties of the F fields used for this purpose. The F fields themselves provide ad hoc chunking strategies through definitions via the intersection of whatever heritable properties can be mobilised in a routine fashion to organise speech performance.
The simplest formal chunking strategy is that of the contrast set, and the simplest contrast set is that of binary opposition, which needs only its super-ordinate label (e.g., assent: yes / no). Larger contrast sets may also be accommodated under a single label (e.g., primary colour terms: black, white, red, blue, green, yellow, orange, brown, purple). Terms may be, but need not be, exclusive: for assent, yes/no/maybe are not exclusive in terms of the middle term. When the number of elements in a contrast set grows large, the chunking strategy is to group items on the basis of super-ordinate similarity (one of the principles of the F cones), often in the form of a taxonomy. The taxonomy, however, presupposes co-exclusivity among pairs of elements at the same level, e.g., mammals are never reptiles. The most general kinds of chunking are the overlapping hierarchical relations that we defined for F cones. Definition itself is the most primitive or fundamental act of chunking. Definition is usually accomplished by replacing a series of implicit or extended labels (and an unmarked category) by a label for their intersection, a new word.
I had the opportunity to analyse the workings of what amazed twentieth-century anthropologists in what were called in the 1920s examples of "primitive thought" and re-surfaced in the 1980s as the "rationality" debate (Jorion 1989). In a classical case of so-called "primitive mentality", the Nuer's (of Southern Sudan) claim that "twins are birds" (as reported by the anthropologist E. E. Evans-Pritchard) was accompanied by the concurrent claim that "birds are twins" (reported on the same page of Evans-Pritchard's Nuer Religion). This controversial opinion amounted simply to establishing between two different "stuffs" a symmetrical relationship, typical of endogenous fields. The "anomaly" results from the fact that our modern mind cannot help but hear here instead an anti-symmetrical inclusive relationship such as is found in hereditary fields. Our tendency is to interpret "twins are birds" as meaning "twins are a sub-category of birds (alongside other things)", which is to our modern mind blatantly false. We are in no doubt that twins are a subcategory of human beings, belonging to the order of mammals; at the level of the Vertebrate realm, birds and mammals represent mutually exclusive orders where no species belongs to both; therefore if anything, like twins, has been recognised as a mammal, it cannot also be a bird. Understood as "twins have 'birdness'" and "birds have 'twinness'", the Nuer's view, however unwarranted, suddenly gains a poetic aura instead of an offensive irrationality. In other words, in so-called "primitive thought" there is no "F"ness, only "G"ness. This emphasises that poetry is the favoured domain of G connections. One can imagine poets and literati in many languages struggling against the tyranny of F structures and their role in the routinisation of thought, in attempts to impose less constrained and more aesthetic alternatives. "Rêverie", whether in thought process or associated self-rewarding action, is the normal smooth functioning of G operators, and can be described in terms of "moving centre" operations, of which the participant in a football game is an apt illustration.
Kung-Sun-Lung's paradox, mentioned in section 8, that a "white horse" is no horse, is a consequence of trying to compose higher-level concepts by combining the penetrable with the impenetrable: "white-horse" becomes then a higher concept to both "horse" and "white", in the same way as generating "traction animals" by combining "ox-horse", or "nature" through "water-mountain". Such is precisely the only valid intersection of extended labels, since they exclude one another at the level of exemplars but operate above the level of concrete individuals. This would however force us to say "All horse is white-horse", which is obviously untrue, or "Some white-horse is horse", which is equally false. The play here, of course, is on the direction of the super-ordinate relation, one of which we have called the join (of extended labels) and the other the meet (of sets of exemplars). Kung-Sun-Lung's counter-common-sense conclusion is that "white horse" is no "horse" (as super-ordinate it is above the horse, whereas common sense says horse is above, and white horse a kind of horse). In this paradoxical structure, "white horse" is a more generic kind of horse, having abstracted the abstract properties of whiteness and the material properties of individual "horseness", so (individual) horses are exemplars of "white-horses", and there are other kinds of "white-horse" than "horse", so "horse" is no "white-horse". In an Aristotelian perspective, the penetrable "white" applied to the impenetrable "horse" is an accidental property of an exemplar of horse; it has no place in the definition of "horse" (only of "horse, accidentally white"); from this it derives that penetrable and impenetrable (unlike impenetrable and impenetrable) cannot define a higher-order concept. Kung-Sun-Lung sidesteps a full discussion of the colour issue, and the implications of "yellow horse"/"white horse" being impenetrable as individuals. As far as the "horse"/"horse" intersection is concerned there is just "horseness", no exclusivity. Graham translated and discussed the issue inadequately as a part/whole relation. His pupil Hansen does a more adequate job. He notes that every substantive is used in a partitive way: "grain" rather than "the grain", with an implication of bulkiness. To refer to "white horse" you say "some white horseness" to get the exemplar. In other words, there are no proper Fs here except by agglutination. So by the intersection of "some white horseness" and "some yellow horseness" you get "horseness": not the union, as we would think, but the extended-label intersection. Thus, while the Greeks are talking about the paradoxes of self-reference (e.g., the liar's paradox), the Chinese paradoxes are about meets and joins, intersection and union, as in our model of the F field.
In archaic Chinese thought, the character used to represent a stuff is part of it in the same way as any other of, we would say, its essential properties. There is no notion of any arbitrariness of the sign, which is the key to effective "F"ness. Instead of creating ever higher-level concepts by extending the generality of lower-level ones, restricting their essential properties to those common to the labels below them in the hierarchy (while, in the reverse direction, exemplars are able to cumulate the essential properties of labels higher up in the Galois Lattice), the Chinese created higher-up levels through the aggregation of exemplars, resorting to what we would call today an "extensional" strategy rather than an "intensional" one: instead of acting on the semantics through the grouping of essential properties (garnering the "intension" or essence), the Chinese method acts on the "corralling" of exemplars (which remain therefore numerable, thus measuring the "extension"). In defining, say, a concept of "traction animals" by the juxtaposition "ox-horse", the Chinese, instead of abstracting the essence of "traction-animal-ness" (the shared part of the properties belonging to "ox", "horse", "buffalo", etc.), create a larger class through the simple addition of "all ox" to "all horse".
At about the same time as Aristotle, Chinese thought illustrates a different principle (Granet 1934): a preference for rich symbols that generate a practically endless array of possible affinities, whose intersections as abstractions (starting from the idea of 10,000 abstract "sorts" or essences encoded in the Chinese characters) are made to play the role of abstract definitions. Here there is a potentially infinite possibility for intersection hierarchies, an F-structure combinatoric of 2 to the 10,000th power of potential intersections (100,000,000 if just taken pair-wise), along with what one would suppose are increased difficulties or ambiguities in assessing implicit or expanded labels for abstract concepts. Selection among those possibilities for intersection was not based on the notion of "natural domains" organised as taxonomies such as Genera/Species but on (1) intersections of implicit labels (e.g., Ox-Horse, animals of traction) playing off against (2) visual and aesthetic properties of elements in the character set itself used as principles of combinatoric creativity (Granet 1934: 52) and (3) oppositional pairs based on harmonies related to yin/yang, female/male, etc. (ibid. 188; 125). Aristotelian categories such as Time, Space, and Number are not given independent play but instead are bound up in "harmonious" unities (ibid. 29) and are to a great extent absent from the grammar. Granet envisions a distinctive conception of the organisation of society (ibid. 27) and of the organisation of experience as the basis of Chinese categories (ibid. 29), rather than a purported mystical basis (ibid. 28).
12. The hereditary principle is historical: it allows syllogistic reasoning and amounts to the emergence of "reason" in history
The main distinction between hereditary and endogenous fields is the fact that the former are anti-symmetrical and transitive, while the latter are symmetrical and intransitive. Once it has been said that "twins have birdness" and "birds have twinness", one has gone full circle and there is very little that can be added. On the contrary, the intellectual power associated with hereditary fields, once it has been noticed, is staggering. The syllogism certainly meets its demise in examples like the one quoted above, where from "violets are blue" and "blue is a colour" there is nothing to conclude. But in so many other cases the speaker will experience the elation that there is no end to the richness of including exemplars in labels and allowing the essential properties belonging to these to shower down. The contrasting principles of deepening abstraction on the way up and of extension to an increasing number of exemplars on the way down reveal a whole world of apparently fresh information through the constraints that the dialectics of quantifiers impose.
A sentence minimally composed of a word-pair and an operator linking the two is either symmetric or anti-symmetric. The Greeks called such clauses logos, but when the word was used in a more technical sense, logos referred specifically to the anti-symmetric type, because of its disposition to call for more. The bringing together of two logoi for evocative purposes is the analogia, i.e. the proportion. When numbers or algebraic symbols constitute the analogia, the equivalent of the anti-symmetric clause is a ratio, and this is why logos translates as "ratio" in mathematics. A mathematical analogia is what we still refer to today as a "proportion".
If there is a common middle term in a proportion, i.e. if there are three terms in the proportion instead of four, it is said to be continuous (vs. discrete, when there are four terms). In a continuous proportion the common middle term is a mean (meson). In the discursive mode, four figures of the analogia are possible, according to whether the ratios brought together "face to face" are both anti-symmetric, both symmetric, or one symmetric and the other anti-symmetric. If the analogia is discrete, it corresponds to what we now call an "analogy" and the Greeks called a paradigm. It has limited uses, such as drawing attention to homomorphisms between different domains, and has therefore heuristic strength; also, parallel terms (major and second middle, first middle and minor) can stand for each other for evocative, figurative use, under the name of metaphor.
If the analogia is continuous, it allows a direct relation to be established between the major and the minor in the form of an informative "conclusion", and we are then dealing with the syllogism. [Aristotle covers under the label syllogism only three of the possible figures: when the associative chains are both anti-symmetric, when the first is anti-symmetric and the second symmetric, and when the first is symmetric and the second anti-symmetric. In such cases, the inference is in the mode of the (literal) conclusion, i.e. demonstrative and informative (in the sense of bringing one signifier beyond the reach of its immediate constellation, the set of possible associative chains, within a language's lexicon). Aristotle failed to see that the doubly symmetric figure also allows a conclusion, although at first sight a less informative one.] The anti-symmetric nature of the logos properly so called encourages the concatenation of others: any clause acts as a pointer to further chaining. This potentiality soon raises the question of the compatibility of subsequent clauses, as it can be observed that sense may diverge so that speeches starting from the same premises turn out to be incompatible, i.e. claim states of affairs which are contradictory.
The Greeks proposed different types of solutions to this intellectual difficulty, to be used on their own or in combination: division, formalisation or ostensive fit.
1) Plato advocates (or simply reports, the issue is unclear) the method of division or dichotomy, which consists of solving a question through consecutive choices between alternatives. The difficulties involved are those of ensuring that one is dealing at every stage of division with an exhaustive contrast set of only two terms. Aristotle noted the underlying assumption — which he regarded as unwarranted — that any classification amounts to a dichotomous tree and ends up out of necessity with a number of labels being a power of two. He supported however the use of the division method as a useful tool for the purpose of definition.
2) The question of compatibility can alternatively be faced head-on: one defines the possible joints between subsequent clauses so that, starting from identical premises, it is only possible to reach identical ultimate statements (whatever the length of the concatenation of clauses). This leads to a drastic reduction of the number of operators (within the clause) and joints (between clauses) to those that can be accounted for by a "truth table" (truth being here understood in a manner independent of the content provided in the sentence by the predicate and subject). Thus an embryo of formal logic is developed, and with it an axiomatic mathematics.
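The content-independent notion of truth invoked here is easy to mimic; the sketch below (my illustration, not a reconstruction of any ancient system) tabulates a compound clause over every assignment of its terms: a concatenation is acceptable when it comes out true on every row, whatever the subject and predicate stand for.

```python
from itertools import product

def truth_table(formula, variables):
    """Evaluate a propositional formula for every assignment of its variables."""
    rows = []
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        rows.append((env, formula(**env)))
    return rows

# "If p and q, then p": true on every row, whatever p and q stand for.
for env, value in truth_table(lambda p, q: (not (p and q)) or p, ["p", "q"]):
    print(env, value)
```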
3) The question of compatibility can also be approached indirectly. Let each associative chain be submitted to the test of one external criterion, so that if it successfully passes the test one is guaranteed that no two sequences of clauses starting from the same premises will turn out to be contradictory. The method consists of assessing the truth or falsehood of each individual associative chain. The way to ensure the truth of a clause is that it depicts the world as it stands, i.e. that the words match the state of affairs. There would be two ways of saying the truth: asserting the truth of what is true and the falseness of what is false. [The first historical account we have of this suggestion is in Plato's Sophist:
"Stranger: A sentence must and cannot help having a subject.
Theaetetus: True.
Stranger: And must be of a certain quality.
Theaetetus: Certainly. […]
Stranger: I will repeat a sentence to you in which a thing and an action are combined, by the help of a noun and a verb; and you shall tell me of whom the sentence speaks.
Theaetetus: I will, to the best of my power.
Stranger: 'Theaetetus sits'— not a very long sentence.
Theaetetus: Not very.
Stranger: Of whom does the sentence speak, and who is the subject? that is what you have to tell.
Theaetetus: Of me; I am the subject.
Stranger: Or this sentence, again […] "Theaetetus, with whom I am now speaking, is flying."
Theaetetus: That also is a sentence which will be admitted by every one to speak of me, and to apply to me.
Stranger: We agreed that every sentence must necessarily have a certain quality.
Theaetetus: Yes.
Stranger: And what is the quality of each of these two sentences?
Theaetetus: The one, as I imagine, is false, and the other true.
Stranger: The true says what is true about you?
Theaetetus: Yes.
Stranger: And the false says what is other than true?
Theaetetus: Yes.
Stranger: And therefore speaks of things which are not as if they were?
Theaetetus: True. [...]
Stranger: When other, then, is asserted of you as the same, and not-being as being, such a combination of nouns and verbs is really and truly false discourse.
Theaetetus: Most true" (Plato, The Sophist).]
Truth would obtain when the clause depicts the world as it stands. This raises in turn the question of how to assess that the words fit the state of affairs. The potential for perceptual illusion (phenomenon) prevents us from simply comparing what is being said with what the world looks like, as this can be deceptive. This latter difficulty can be dealt with on one condition: that another world be supposed beyond the world of appearances, more sturdy, exempt from deceptions, such that truth can be judged with respect to it. Such an assumed world amounts to an objective reality.
Plato and Aristotle come up with divergent views about the nature of such an objective reality: for Plato, it is the world of Ideas; for Aristotle, the world in potentiality. In Plato's view, entities of the phenomenal world actualise in an approximate, imperfect fashion the ideal and atemporal Forms of the world of Ideas. In Aristotle's system, the phenomenal world is the historical actualisation, susceptible to corruption, of a world in potentiality whose elementary bricks are the genera. Thus in the world in potentiality an egg, a chick and a rooster are one and the same thing, while in the world in actuality they are distinct objects. Aristotle would add that the celestial world is immune to corruption and therefore perennial, under a form which remains identical to itself.
The description and explanation of objective reality would from then on define the programme of the scientific enterprise. Within it the two methods for ensuring a priori the compatibility of subsequent associative chains would be combined:
1) the fit between the word and the thing would be attained through the systematic resort to experimentation,
2) while the overall compatibility of statements would (redundantly) be guaranteed by the modelling through mathematics and formalised logic.
III. Dynamics
13. The skeleton of each speech act is a path of finite length in the Network
Acoustic imprints, the individual elements of the Network, hold a static topological property emerging from their location within it, commonly called "meaning". This meaning is nothing more than, in set theory terms, the union of all the word-pairs a word can form with its immediate neighbours in the Network. Our view hardly differs on this subject from the one held by Niels Bohr, the physicist who played a leading role in the development of quantum mechanics. Here is a passage from a book by Edward M. MacKinnon: "[following Bohr] ... clarifying the meaning of a word involves showing how it relates to other words through logical structures somehow implicit in language. Any attempt to consider a term in isolation from such logical structures as having a precise meaning is essentially misguided. In a particular context an analysis of a term's meaning focuses on some aspects of the network of logical relations connecting this term to the body of language [...] It follows that the language we use to describe the world as experienced is not susceptible to any sharp all-purposes distinctions between content and logical form. Any form-content distinction is invariably context-dependent" (MacKinnon 1982: 354-355). In an interview, Heisenberg reported the following: "Bohr said that when any word is produced, that word raises something into the light of consciousness, and at the same time it raises many other things which are only in a shaded light and are almost entirely covered, and all these things enter a consciousness at the same time" (ibid. 355). Individual paths, concatenations of word-pairs, on the Network hold "logical" propensities, a consequence of such elements being elicited along a path in interlocking pairs, one pair at a time. Thus a response in an S-R perspective of speech performance might be surprisingly un-crude when envisaged as the pulling of a string of elements from the Network which a language's lexicon constitutes. What decides precisely the path taken in any instance, and why the unfolding process ends when and where it does (making each uttered string of finite length), are the affect values attached to the word-pairs in the Network.
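To fix ideas, here is a minimal sketch of this representation (an illustrative assumption of ours, not the ANELLA source): the Network as a dictionary of word-pairs carrying affect values, with the "meaning" of a word read off as the set of pairs it forms with its immediate neighbours.

```python
# Minimal sketch of the Network: word-pairs carrying affect values.
# The vocabulary and the numerical values are illustrative assumptions.
network = {
    frozenset(("milk", "white")): 0.7,
    frozenset(("white", "mist")): 0.4,
    frozenset(("mist", "moist")): 0.5,
    frozenset(("moist", "Autumn")): 0.9,
}

def meaning(word, network):
    """'Meaning' as the union of all word-pairs the word belongs to
    with its immediate neighbours in the Network."""
    return {pair for pair in network if word in pair}

print(meaning("white", network))
# -> {frozenset({'milk', 'white'}), frozenset({'white', 'mist'})}
```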
14. A speech act is the outcome of several "coatings" on a path in the Network
A path in the Network amounts only to a set of word-pairs, linked through the word that subsequent word-pairs possess in common. Say: "milk-white", "white-mist", "mist-moist", "moist-Autumn". [Aristotle: "The cause is that they pass swiftly in thought from one point to another, e.g. from milk to white, from white to mist, and thence to moist, from which one remembers Autumn (the 'season of mists'), if this be the season he is trying to recollect" (Aristotle, On Memory and Reminiscence).]
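The chaining condition can be stated mechanically: a sequence of word-pairs is a path when each pair shares a word with the next. A small sketch, continuing the illustrative representation above:

```python
# A path in the Network: consecutive word-pairs linked through a shared word.
def is_path(pairs):
    """Return True when each word-pair shares a word with the next one."""
    return all(set(a) & set(b) for a, b in zip(pairs, pairs[1:]))

aristotle_chain = [("milk", "white"), ("white", "mist"),
                   ("mist", "moist"), ("moist", "Autumn")]
print(is_path(aristotle_chain))  # -> True
```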
These word-pairs then go through a number of "coatings", ending up in the sentences we are familiar with. The image of coatings is fitting, like a confection being dipped in chocolate, candy, frosting, etc., although the true mechanism is closer in our view to this: the skeleton of a speech act that the word-pairs provide grows into a multi-dimensional space through various accretions, and it is the sequential nature of speech that ultimately reduces this structure to a linear process, like the rollers in old-fashioned washing machines that allowed most of the water to be wrung out. This view of sentence-generation as the "flattening" of a multi-dimensional structure was initially developed by the French linguist Lucien Tesnière in his "structural linguistics" (Tesnière 1982; Jorion 1996).
The terms which undergo the coatings, the members of the word-pairs embedded in the Network, are, as we've seen, of two types: the categoremes that stand for universal terms, which we've called "labels", such as "elephant", and the demonstratives standing for "exemplars", the latter being "proper nouns", such as "Oscar", demonstratives like "this", or pronouns like "I" or "you", these two being what linguists after Boas and Jakobson call "shifters" (Boas; Jakobson).
2. The first layer of coating is where relationship terms are fitted between the two terms of the word pair in what can be regarded as a word-pair expansion. [This is Kant's "judgement"]. These terms are of different types. If the relationship is that of a hereditary field, our F structures, then the words introduced between the halves of the word-pair are some equivalent or other of "is a". In this kind of usage, "to be" is usually referred to as a "copula". Such clauses, as we've seen, are only then valid in terms of their truth if the proper quantifiers are applied, like the prototypical "all" or "some".
3. If the relationship belongs to Aristotle's ten categories, the G structures or endogenous fields, then the range is more varied, and depending on which, it will be expressed for "milk-white" as "milk is white", "Alexander-horse" as "Alexander has a horse" with "to have" in a copula-like function, "Oscar-London", "Oscar is in London", "tower-midnight", "the tower at midnight", etc.
4. In English the relationship "has a" can be expressed through the genitive, with "Alexander's horse" instead of "Alexander has a horse" or "the horse of Alexander". "Alexander's horse" however reverses the focus from Bucephalus to Alexander.
5. Sometimes, as in the genitive, there is no additional term between the two members of the word-pair, as one of them is turned into an adjective or into a verb. "Violet-blue" becomes "the blue violet" or "the blue violets". As can be seen in this example, it is the article "the" which in such cases is the additional term indicating the relationship in the word-pair expansion. "The", just as much as "blue", expresses the determination of "violet"; it can act at the same time as a quantifier, as in the contrast between "the blue violet" and "blue violets". The term used for words such as "the" or "blue" in the linguistic analysis of Chinese is "determinant", and it is therefore the term we will retain as well. Verbs are also used as determinants: "elephant-fly" can lead to "the elephant is flying", "the elephant flies", "the flying elephant". Verbs allow a number of Aristotle's categories to be cumulated simultaneously: they tell about "acting" or "enduring" and location in time, providing also a rough view on quantity: "elephants were flying" tells about "acting", "position", "quantity" and "time". Determinants can also be expressed in English through other parts of speech like adverbs, numerals for quantity, or words like "tomorrow" for the expression of time or "over there" for location, etc.
6. Types 2 to 5 allow word-pairs to be rendered within the field of Aristotle's categories. Further word functions are needed to ensure broader speech act units like whole sentences. First is the "recaller" or "anaphor", which refers to a member of a word-pair without mentioning it again explicitly, like "he", "who", "that", etc. The anaphor is quite revealing of the working of short-term memory and, as we hold, of the affect dynamics underlying speech performance. Indeed, it seems to us that only the very precise and lasting affect value of terms used some time earlier within a speech act allows them to be referred to through very indefinite markers such as "her" or "which".
7. Generally speaking, because of the various possible concatenations between F and G, syncategoremes belong to four types: "F/F", "G/G", "F/G" and "G/F".
An elementary path connecting two word-pairs through a Network's word-space belongs to one of the following types:
1. f: upwards within an F sub-structure: from exemplar to label,
2. f-1: downwards within an F sub-structure: from label to exemplar,
3. g : within a G sub-structure.
Concatenating an f or an f-1 with another f or f-1 calls between them an "F/F" syncategoreme. Concatenating an f or an f-1 with a g requires an "F/G" syncategoreme. Concatenating a g with an f or an f-1 requires a "G/F" syncategoreme. Concatenating a g with a g requires a "G/G" syncategoreme.
Concatenations of f and f-1 elementary paths generate syllogisms. f and f-1 path elements belong to the part of logic which Aristotle called analytics and constitute the discourse of science (episteme). g elementary paths belong to the part of logic which Aristotle called dialectics and constitute the discourse of opinion (doxa).
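In code, the choice of syncategoreme type can be read off mechanically from the types of the two elementary paths being concatenated. A minimal sketch (the function name and labels are ours; the mapping follows the text):

```python
# Selecting the syncategoreme type ("F/F", "F/G", "G/F", "G/G") from the
# types of the two elementary paths being concatenated.
def syncategoreme(first, second):
    """first, second: 'f' or 'f-1' (hereditary field) or 'g' (endogenous field)."""
    kind = lambda step: "F" if step in ("f", "f-1") else "G"
    return f"{kind(first)}/{kind(second)}"

print(syncategoreme("f", "f-1"))  # -> F/F (syllogistic concatenation)
print(syncategoreme("f", "g"))    # -> F/G
print(syncategoreme("g", "f-1"))  # -> G/F
print(syncategoreme("g", "g"))    # -> G/G
```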
Prominent among the terms allowing larger speech act units than simple word-pair expansions to be constituted are the so-called "logical connectors" like "and", "or", and "if... then...". The reason they are called "logical" is that, defining the scope of "first order logic", they allow mechanical truth tables to be drawn, predicting the truth of combinations of smaller units of which word-pair expansions would be typical cases. If I say "violets are blue and penguins fly", this is mechanically false, as a combination through "and" requires both branches to be true; if I say "violets are blue or penguins fly", this is all right because a combination through "or" is true if at least one of its branches is true in the case of the inclusive "or", or, in the case of the exclusive "or", is true if exactly one of the branches is true; both types of "or" would be satisfied by our illustration.
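These mechanical truth tables are trivial to reproduce; a sketch using the example above (the function names are ours):

```python
# First-order connectors as mechanical truth tables.
def and_(p, q): return p and q          # true only if both branches are true
def or_inclusive(p, q): return p or q   # true if at least one branch is true
def or_exclusive(p, q): return p != q   # true if exactly one branch is true

violets_are_blue, penguins_fly = True, False
print(and_(violets_are_blue, penguins_fly))          # -> False
print(or_inclusive(violets_are_blue, penguins_fly))  # -> True
print(or_exclusive(violets_are_blue, penguins_fly))  # -> True
```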
8. Of a similar nature to the logical connectors are the "compatibility connectors" that we discussed with the example of "nonetheless". There is a whole range of them, from the confident like "because", through the tepid like "thus", to the despondent like "despite". We referred to the latter as "contradiction insulators" or "compatibility patches". These are evoked precisely when the word-pair expansions brought together are in no position to have their fate linked with the likes of "and", "or", and especially not the "if... then..." of implication. Then some word needs to act as link, and, as we said, brands the link or, in the worst of cases, "defuses" the feeling of a potential contradiction.
9. Of a related nature are what we call "continuity connectors". Again, a whole range, from "then" and "next", which signal both time and space contiguity, to those dealing with the unease that might arise, not this time from a potential contradiction but from a lack of continuity, which can be in time: "In the meantime...", in space: "Meanwhile back in the jungle...", or in topicality, as with the archetypal "Anyway...".
10. "Final touches" are exerted by a variety of functions like Malinowski's "phatic" markers (Jakobson...) used to maintain the listening party's attention like "hello, hello!" or "y' know what I mean?" .
11. Also, "highlighters" which aim at emphasising what is being said: "a movie that's real good". In "a film which is really good", "really" is a determinant of good, it specifies how good the film is, while in "a movie that's real good", "real" is a highlighter. In his plays David Rabe is a master of highlighters: "You know what this is, Mickey? This is goddamn PARA-fucking NOID" (Hurly-burly).
12. Finally, the "assent markers", which gauge, for the benefit of the listening party, the degree to which the speaker identifies with what s/he states. These markers may be blunt, as in "It is true that I am seldom wrong about the weather" or "It is a fact that the Earth is flat", but they can be subtle, as in "I simply cannot visualise what So & So is claiming" (as has been seen in Behavioral and Brain Sciences). In a more negotiating, or conciliatory, mode, assent markers favour resorting to "knowing" and "believing": "Teacher, I believe that 11 and 15 rather make 26 (... instead of 36, as you just said)". [Some logicians refer to statements beginning with "I know..." or "I believe..." as belonging to a particular type of logic, which they call "epistemic", meaning that such phrases refer, as they claim, to particular states of knowledge. This is a tragic misunderstanding about how such phrases are used. "I know I'm not always right on these issues" does not express any quality of knowledge, it means "OK, I messed up in the past" (see such arguments developed in Jorion & Delbos 1985).]
Some additional aspects of these functions deserve to be mentioned.
Firstly, as we've seen, the functions enumerated here make short shrift of the traditional division into parts of speech: the pronoun "I" is mentioned as a demonstrative, the pronoun "he" as an anaphor; "is" ensures connection of the members of a word-pair within hereditary fields as a copula, as in "Puss is a cat"; it also ensures connection of word-pair terms in endogenous fields, where "a violet is blue" stands for "a violet has blueness"; it can also act in its own right as a verb, i.e. a determinant, as in "Socrates is". [Emile Benveniste was the first to draw attention to the absence of unity in the functions performed by personal pronouns (Benveniste 1964:…).]
Also, as is clear from the previous examples, any of these functions can be fulfilled by any arbitrary number of words. For example, the following will operate as a single demonstrative: "This song which I'm sure you remember, beginning with 'I wish you blue birds in the spring'...". Secondly, these functions are variously located as far as the relationships between speech, the empirical world and the speaking subject are concerned: some cover relations between words and the empirical world, some between the words themselves, while some others cover the relationship between the universe of words and the person of the speaking subject.
1. Demonstratives establish a direct link between words and objects in the empirical world. Their function is deictic, i.e. to establish a bijective relation between a word and an object. Contrary to a commonly held assumption, they constitute the only type of words having an actual referent (significate in medieval linguistics) in the empirical world: contrary to what the extensive conception of logic assumes, universals do not refer to a precise collection of exemplars. "Elephants": alive? Alive and dead? That will ever be? Etc.
2. Categoremes stand for objects likely to be monstrated. Their use is however cut off from any actual monstration. Categoremes function only in relation to each other as being part of the Network.
3. Determinants restrict the extension, the number of instances (the "world applicability") of demonstratives and categoremes in one of the dimensions that Aristotle described as one of his ten categories.
4. Anaphors, as they establish a bijective link between a demonstrative or categoreme mentioned earlier and a new instance of it, act somewhat like demonstratives, but this time entirely within the discursive world.
15. The dynamics of speech acts is a gradient descent in the phase space of the Network submitted to a dynamics
We've recalled the psychological experience of the talking subject when he or she speaks. There is no doubt a certain type of satisfaction that derives from observing the consequences of one's speech: to see at work the "power of words", when, without making any other motion than moving one's mouth, one can have a quite distant window opened by saying "Can you, please, open the window?". At the same time, there is, even more immediate, the satisfaction of having spoken, of having said what one felt like saying. As if the words uttered had been, before being uttered, the source of an inner tension that got relieved through the very fact of the speech act. Whether it is through simple self-expression or through the consequences of the speech act, the principal benefit is in regaining composure, equanimity of mind through having said it: to recover a mind once more serene. These are indeed the subjective emotions which a human being undergoes. From a subjective point of view speech performance is essentially cathartic, it aims at psychological relief; objectively speaking it might be more appropriate to say that "speech performance results in psychological relief". It is tempting therefore to come up with a physical model that would not only suggest the operation of a plausible mechanism, but at the same time show why the psychological experience accompanying speech performance is the way we subjectively know it to be. This sounds so much like a straightforward scientific pursuit that one may wonder why it sounds awkwardly innovative. [An isolated example is Freud's Esquisse d'une psychologie scientifique (1895), in Freud 1956, 307-396. Pribram and Gill devoted a detailed analysis to Freud's unpublished manuscript: Pribram, K.H. & Gill, M.M., 1986, Le « Projet de psychologie scientifique » de Freud : un nouveau regard, Paris : Presses universitaires de France.]
The main reason, according to us, lies in the pervasiveness of the folk-psychological model of speech performance, to which the so-called "functionalist" view, which is just a repetition of it, has given new life. This, the current epistemological creed of the cognitive sciences, borrowed from the late nineteenth-century philosopher Brentano, can be sketched in the following manner: consciousness is the faculty materialising an intention into the action which this intention is aiming at; an intention is determined by a desire which is itself grounded in a justified belief; desires and beliefs are mental states corresponding to specific material configurations of brain cells (as summarised by Searle 1997: 44). That none of the conceptual entities required by the model has ever been shown to exist seems not to have ever hindered its popularity, even in educated circles.
The relaxation that the subject experiences may very well reflect the physical relaxation taking place in the Network when a particular path is travelled. For a relaxation to take place it is only required that within a possibility space, such activation equates to a gradient descent leading to a potential well acting as an attractor. Then relaxation takes place, i.e. the charges on the memory network are recomposed into a local minimum. [Aristotle: "thinking has more resemblance to a coming to rest or arrest than to a movement" (On the Soul, I, iii).] This possibility space is that of the word-pairs in the Network with their accompanying affect values. Whatever motive (we'll come back to this below) triggers speech performance in the talking subject, he or she reveals what chord has been struck and all there is within his / her Network that can thus be elicited as a response.
An intriguing aspect of the affective dynamics which we suppose underlies the Network is that it can alternatively and as effectively be regarded as a relevance dynamics. Saying that a talking subject produces, in response to a stimulus, the speech acts which bring the highest satisfaction, or saying that they seem to him / her the most relevant, is to all intents and purposes equivalent.
The model to be considered is that of a Network made of word-pairs, each with a particular affect value attached to it. The concept of affect value was introduced by Freud: "... among psychic functions one needs to distinguish something (a quantum of affect, a sum of excitation) that possesses all features of a quantity — although we have no notion of how we would measure it —, something that is likely to rise, to diminish, to get displaced, to get released, and which is distributed on the memory traces of representations, in some way like electric charges on the surface of bodies" (Freud 1894). In addition there is a potential associated with the Network, similarly to an electrical circuit, in such a way that there are constraints on what values neighbouring word-pairs can hold. As psychoanalysis has shown (we saw earlier Niels Bohr holding similar views), if I feel in a particular way about "Eve's apple", then the affect value of "apple of my eye" will be to some extent constrained by this. The principle of the method of "free association" is that, working through the less highly charged parts of the Network, it will be possible to get nearer — and possibly "free" — specific paths that unpleasant associations have "censored" on the Network (we'll get back to this in section 20).
Some stimulus having acted as trigger, a talking subject embarks on a speech act that leads to a string of sentences of variable length. The trigger has decided what is relevant, and the subject then unfolds what brings him or her the highest satisfaction in the saying of it. Having done so, the subject interrupts his / her speaking and stays quiet. The relaxation is short-lived, as there exists an overabundance of possible external or internal events likely to re-introduce imbalance.
The beauty of it is that the talking subject hears him / herself talk, which may elicit in him / her (in the Network) new points of relevance, activating the Network in additional places. The experience is familiar of stating something and, in the process, having these little lights come up: "Oh! This also deserves to be said about it!", in such a way that one may feel somewhat overwhelmed, with more things pressing to be said than one can utter. As the listening party is undergoing the same process, the talking subject may end up stuck with a wealth of relevant things to say that will never be uttered. The feeling of being stuck with things to say that have not been allowed to come out is particularly uncomfortable: some equilibrium of the affective charges on the Network needs to be regained "through natural means", as would happen in a process like "annealing".
From any starting point on the Network, from any most relevant word-pair, there is more than one possible path that leads out. The one that is taken is the one that is the most pertinent, i.e. the one that leads in the speediest way to the satisfaction of having reached a potential well. In terms of the gradient descent, the most satisfactory, i.e. the most relevant path from the subject's point of view, is thus the one that will be doing the job in the shortest time, that is: the path with the steepest slope. Practically, at each bifurcation, that is at every connection between a word-pair and all the other pairs one of its words is part of (all its "usages", in other words its meaning), the choice will be made according to the gradient descent's "gravity" principle of the steepest slope. The choice will be the most pertinent in context, i.e. the bifurcation that brings the most satisfaction for being passed through. Some channelling of the relaxation process is imposed by the tongue itself, but at any bifurcation, at any location where there is a choice to be made between, say, various nouns, verbs or adjectives, the leading principle is that of the gradient descent: which of the word-pairs is likely to make the highest contribution to the current task of relaxation?
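In code, this "gravity" principle amounts to a greedy choice at each bifurcation. A minimal sketch, under the assumptions (ours, for illustration only) that the contribution to relaxation of a word-pair is simply its current affect value and that the descent stops once no candidate exceeds a rest threshold:

```python
# Greedy "steepest slope" traversal: at each bifurcation follow the
# word-pair whose affect value (tension awaiting discharge) is highest,
# and stop when nothing exceeds a rest threshold (the potential well).
# Vocabulary, values and stopping rule are illustrative assumptions.
network = {
    frozenset(("milk", "white")): 0.7,
    frozenset(("white", "mist")): 0.4,
    frozenset(("white", "snow")): 0.1,
    frozenset(("mist", "moist")): 0.5,
    frozenset(("moist", "Autumn")): 0.9,
}

def descend(start, network, rest_threshold=0.2):
    path, word, visited = [], start, set()
    while True:
        candidates = {p: v for p, v in network.items()
                      if word in p and p not in visited}
        if not candidates:
            break
        pair, value = max(candidates.items(), key=lambda item: item[1])
        if value < rest_threshold:   # potential well reached: relaxation
            break
        visited.add(pair)
        path.append(sorted(pair))
        word = next(w for w in pair if w != word)
    return path

print(descend("milk", network))
# -> [['milk', 'white'], ['mist', 'white'], ['mist', 'moist'], ['Autumn', 'moist']]
```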
Relevance is largely contextual, or maybe more aptly, interactive, although, depending on the individual, the affective dynamics may be more or less insulated from external feedback. We're all aware of individuals who always talk of the same subject, whatever the circumstances and whoever forms their potential audience.
A gradient descent model avoids the pitfalls accompanying alternative models implying goal imputation (Jorion 1990a: 94-97; 1994: 94-98; 1997: 3-4). In fact the gradient model applies to speech just as to any type of behaviour. A subject, its history stored as memory, and an environment constitute together a single possibility space where behaviour constantly aims at minimising a dissatisfaction level. A framework for behaviour is thus provided, replacing final causes (targets) by efficient causes in a gradient model where intentions (and worries) constitute potential wells. Within such a framework any sequence of animal or human behaviour can be modelled as a frustration/satisfaction gradient descent within the individual's (animal or human being) potentiality-space as determined by its knowledge, its inventiveness and the current state of its surrounding universe. At each stage of the gradient descent the individual progresses from the local point currently attained in the potentiality-space so as to make the rate of frustration reduction the highest possible.
As opposed to what we have just described in terms of apparently responsive behaviour being driven by a relaxation descent gradient, all prevailing models of thought processes suppose a voluntary and conscious act of speech and thought generation.
16. The utterance of a speech act modifies the affect values of the word-pairs activated in the act
The utterance of a speech act modifies the affect values of the word-pairs activated in the act. Once again this can be accounted for alternatively in terms of affect values or of relevance. Associated with any speech performance there is a reward and punishment system. If one makes a fool of oneself every time one tells a particular story, it is likely that the inclination to tell it once again will tend to diminish. Conversely, any speech act that encounters a high degree of approval will thus be encouraged to be repeated in a similar context. This is nothing but the logic of Hebbian reinforcement at work. Gratification and relevance go in this way hand in hand. To the extent that speech performance ends in the satisfaction that accompanies relaxation, approval of one's views leads in an increasingly direct manner to satisfaction. This is nothing but the dialectics of recognition that Hegel describes in the Phenomenology of Mind (1807).
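A sketch of this reinforcement step, under the same illustrative representation as before (the learning rate and the sign convention for approval are our assumptions):

```python
# Hebbian-style update of affect values after an utterance: word-pairs
# activated in a speech act that met approval are reinforced, those that
# met disapproval are weakened. The learning rate is an illustrative choice.
def reinforce(network, activated_pairs, approval, rate=0.1):
    """approval in [-1.0, 1.0]: the reception the speech act encountered."""
    for pair in activated_pairs:
        network[pair] = network.get(pair, 0.0) + rate * approval
    return network

network = {frozenset(("milk", "white")): 0.7}
reinforce(network, [frozenset(("milk", "white"))], approval=1.0)
print(network)  # -> {frozenset({'milk', 'white'}): 0.7999...} (about 0.8)
```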
17. The gradient descent re-establishes an equilibrium in the network
Sentences addressed to the system "arouse" it emotionally, i.e. they raise the affect value of the concepts involved and of their neighbours in the network. The system responds through speaking, which removes the newly created excess. Expression restores affect values to their initial level, prior to the arousal. As described in section 16, Hebbian reinforcement entails that affect values are never restored to their exact prior level: relevance is being rewarded.
What the Network responds reveals what chord has been struck. In other terms, the response expresses what the stimulus (whatever triggered the speech act) "meant to" the Network. The speech act ensures relaxation through the creation of a gradient path leading to a potential well through the Network's potentiality space. Satisfaction obtained through speech leads to a fugitive equilibrium, easily upset and requiring soon enough the outburst of renewed speech performance to have it relax once more.
18. Imbalance in the affect values attached to the network has four possible sources
A possible analogy to the Network's dynamics is provided by the pinball machine. The ball is initially shot to the top of the incline, then gravity initiates its gradient descent during which it is likely to hit a large number of intentional obstacles, adding points to the player's account, until it gets within reach of the player, who can then manipulate the flippers in order to shoot the ball back to the top of the incline. The number of obstacles, the shapes of the surfaces hit, as well as the slight variations in the impetus given to the initial shot, all contribute to making the path, although undoubtedly deterministic, unpredictable. The game has been set up (and there is an infinity of possible displays) in such a manner that the potential for variation in paths has been maximised. In physical terms, the system has been set up so that the Lyapunov coefficient, measuring the potential for divergence in behaviour, is large.
18.1. Speech acts of an external origin, heard by the subject
In the case of speech performance, the initial impetus has several possible sources. One obvious source is the speech acts of someone other than the speaker. The topic raised by the other party's conversation provides relevance to the subject within the listener's Network and the dialogue is thus initiated from the place where this is located. The closer the resemblance between the Networks, the more likely that the dialectics of the conversation will adopt a ping-pong nature, and last for a while, the tendency to synchronise being in this case high.
18.2. Bodily processes experienced by the speaking subject as "moods"
Another source of input is the talking subject's own body. "Moods" are those variations in affect values (often of purely metabolic origin) which lead a human being to "tell what s/he feels" even unprompted. Moods constitute for the Network a disposition to respond in a particular manner to stimulation: through them affect values are raised or lowered in a global manner. The phenomenon has a noticeable "positive feedback" nature: we get sadder because we hear ourselves tell a sad story. In "rumination", every time we hear ourselves (most often silently) toy with the same annoying thought, every time — the thought being so upsetting — our frustration gets actually increased, as if further propelled by its own momentum. In the absence of the cybernetic loop of "hearing oneself speak", "rumination" would remain unexplained.
18.3. Speech acts of an internal origin: thought processes as "inner speech" or hearing oneself speak (being a sub-case of 2.)
Mood will also initiate the particular type of conversation that is inner speech, i.e. the part of the thinking process which is identical to silent speech performance. Plato had noticed that the central part of what we call "thinking" is nothing more than our inner hearing of sentences produced inside ourselves, in what we call our "imagination". Here in The Sophist: "The Stranger: Are not thought and speech the same, with this exception, that what is called thought is the unuttered conversation of the soul with herself? - Theaetetus: Quite true. - The Stranger: But the stream of thought which flows through the lips and is audible is called speech? - Theaetetus: True." (Plato, The Sophist). The gurgling of one's stomach, for example, may be sufficient to start an inner monologue. This is what makes us human beings: we don't perceive hunger directly, we hear instead the inner voice saying "I kind of feel hungry...". With us, the Word is indeed truly at the beginning. Although our perception is just like that of any other higher mammal, we, the speaking mammal, are faced with the concept before the percept (acute pain might be an exception).
In this respect, what is central to the dynamics, and to our understanding of it, is the cybernetic loop mentioned above. In the same way as dialogue constantly re-launches the gradient descent by having the ball re-thrown into a new part of the Network, inner speech can do so in a self-fuelling manner also. The crucial element here is the short time lag that allows the talking subject to hear himself (even silently) evoke such and such a topic, leading to more associations. What we hear ourselves saying re-launches the emotional dynamics underpinning our speaking, just as do sentences we hear when others utter them. The outer or inner ear registers speech uttered by oneself for the first time when it is uttered (one hears oneself speak at the same time as everybody else does), and this leads to a modification of the affect landscape. The cybernetic loop is not so much a question of the topics being evoked as of the affective process generated in the wake of the speaking. The fact is that one may become increasingly elated at hearing what one is saying, but not necessarily so. We might be pleased or displeased at what we hear ourselves saying. We may very typically feel embarrassed. For the purpose of simplification, our reaction to what we hear ourselves saying is of a dichotomous nature: it is either fear or aggression, i.e. an inhibitory reaction or a reinforcing one: a tendency to stop oneself or to be encouraged to say more of the same. [What linguists have labelled pragmatics is essentially the phenomenology of such inhibition and reinforcement. The speaking subject modulates his/her assent to the very statements s/he utters (see above in section 14.12, the assent markers).] The fear or the elation one experiences at hearing what one is saying then fuels what one will say next.
We're not the first to have noticed this cybernetic loop and its role in the self-fuelling process of thought as inner speech. It is curious however that the authors who have noticed it haven't drawn any significant conclusions from their observations. Patricia Churchland writes: "Some of my thoughts seem to me to be a bit like talking to myself and hence like auditory imagery but some just come out of my mouth as I am talking to someone or affect decisions without ever surfacing as a bit of inner dialogue" (Churchland 1996: 404). The French philosopher Maurice Merleau-Ponty wrote in the early 1950s: "... my own words surprise me and they teach me my own thoughts [...] To express, for the speaking subject, is to become aware; he not only expresses himself for others, he expresses in order that he understands himself what it is he is aiming at […] Ourselves, who are uttering, we do not know necessarily what it is we are saying any better than those who are listening to us" (Merleau-Ponty [1951] 1960: 111, 113, 114).
Aristotle still makes us frown with his insistence that the soul is "automatic", self-driven, subject to self-motion (ref.), but this is precisely what the cybernetic loop entails. Also, the ease that the F hereditary fields offer for the ever-unfolding concatenation of word-pairs has encouraged the perennial trend towards ever-increased talkativeness. [The modern mind has become increasingly agile at developing an autobiographical narrative where every event makes sense in the unfolding realisation of a well-planned personal project.] But the self-fuelling nature of the cybernetic loop provides speech performance with a momentum. A common-sense view of such momentum is, understandably, that it does not derive from the sophisticated dynamics of a self-fuelling process, but derives entirely from an impulse conceived at the very beginning. Here lies no doubt the source of the folk-psychological view that speech acts are the outcome of the "intention" we have of saying what we say. This view leads of course to infinite regress, with the need to postulate an "intention of having the intention" or, specifically in the case of speech performance, the necessity of supposing a "proto-language" of thought processes having its own "proto-grammar", etc. Libet has shown conclusively in his works that the "intention" is a post hoc representation, the subject's apprehension of the psychological phenomenon of the "intention" being located in time a full half second after the action has taken place that the "intention" supposedly "intended" (Libet....).
18.4. Empirical experience
The bringing together of two things within a sentence may happen in two ways: either words already associated within the memory network are evoked in unison as part of a speech act, or a percept brings together words that were so far unconnected at the memory network level. "Roses are flowers" is, except for the very young, stored in an individual's memory network. "This rose is black" might be a new experience, elicited at the sight of what may have come as a surprise. A percept need not necessarily result from the raw operation of the senses; it might also involve the symbolic processing of sentences heard or read: two more ways of acquiring new connections between words, i.e. of learning. These two modes of operation, retrieval of stored material and addition of novel material, reflect the fact that the dynamics of the memory network has two entry points: the "labels" on one side, the "exemplars" on the other.
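A sketch of the two modes under the same illustrative representation as before (the initial affect value given to a freshly learned pair is our assumption):

```python
# Two modes of operation on the memory network: retrieving a stored
# connection versus adding a novel one elicited by a percept (or by a
# sentence heard or read).
network = {frozenset(("rose", "flower")): 0.6}   # "Roses are flowers": stored

def learn(network, word_a, word_b, initial_affect=0.5):
    """Add a new word-pair brought together by a percept, unless stored already."""
    network.setdefault(frozenset((word_a, word_b)), initial_affect)
    return network

learn(network, "rose", "black")   # "This rose is black": a new experience
print(sorted(sorted(pair) for pair in network))
# -> [['black', 'rose'], ['flower', 'rose']]
```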
19. In the healthy subject each path has inherent logical validity; this is a consequence of the topology of the network
The reason why in normal circumstances speech acts automatically display a valid logical structure is by now obvious. Data ("content words") in the Network are organised within hereditary (F) and endogenous (G) fields. In most cases an elementary "coating" of paths along these lines provides valid syllogisms. ANELLA was able in this way to generate syllogisms whose length was only limited by the extent of its vocabulary (see Jorion 1988). One such example would be the user entering "Who has wings?" and ANELLA replying: "a bird has wings", "a parrot is a bird", "Polly is a parrot", "THEREFORE" "Polly has wings".
The limitation to such potentially infinite generation would come with cases, as were mentioned, where the first premise is of type F: "violets are blue", and the second of type G: "blue is a colour". It was however my uncanny experience that ANELLA didn't stop at this difficulty. As soon as ANELLA's programming allowed it to generate such sequences, it produced the following: "Rex has fleas", "fleas is an insect", "THEREFORE" "Rex's insect is fleas". ANELLA had used its very simple ability to construct genitives to solve in a pleasant manner what I had regarded as a potentially insuperable obstacle. Of course "Rex's insect is fleas" is not the conclusion of a syllogism, it is nothing more than an alternative way of connecting two terms at distance "2" on the Network through a third one acting as a stepping-stone. Only here the middle term, the "reason" of the syllogism, is not discarded but retained. Similarly, ANELLA generates from "violets are blue", "blue is a colour", the clause "violets' colour is blue" (Jorion 1988).
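The mechanism can be sketched as follows (an illustrative reconstruction, not ANELLA's actual code): climb F-links from an exemplar, emitting the intermediate clauses and transferring the last predicate back onto the exemplar; when the second premise is a G-link, fall back on the genitive construction.

```python
# Sketch of ANELLA-style chaining (illustrative reconstruction): F-links
# support syllogisms; a G-link as second premise triggers a genitive.
F_links = {
    "Polly": ("is a", "parrot"),
    "parrot": ("is a", "bird"),
    "bird": ("has", "wings"),
}

def syllogism(exemplar):
    """Chain F-links upward, then transfer the last predicate onto the exemplar."""
    clauses, word, last = [], exemplar, None
    while word in F_links:
        relation, target = F_links[word]
        clauses.append(f"{word} {relation} {target}")
        word, last = target, (relation, target)
    if last:
        clauses.append(f"THEREFORE {exemplar} {last[0]} {last[1]}")
    return clauses

print(syllogism("Polly"))
# -> ['Polly is a parrot', 'parrot is a bird', 'bird has wings',
#     'THEREFORE Polly has wings']

def genitive_fallback(subject, f_object, g_label):
    """Connect two terms at distance 2 through a genitive, keeping the middle term."""
    return f"{subject}'s {g_label} is {f_object}"

print(genitive_fallback("Rex", "fleas", "insect"))  # -> "Rex's insect is fleas"
```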
20. Neurosis results from imbalance of affect values on the network preventing normal flow (Freudian "repression")
The association of elements in the Network with affect values, and the dynamics of speech performance as being dictated by such affect values, is necessarily open to some pathology.
(i) What if affect values are so distributed that the talking subject cannot help but come up with the same topic as being the most relevant in all circumstances: wouldn't this sound like obsession to the listener?
(ii) What if the affective landscape were so flat that a potential well would be difficult for the speaker to attain? Wouldn't this amount to logorrhoea?
(iii) What if the affect values attached to some words (or better, word-pairs) were such that the talking subject is unable to pronounce them? Wouldn't it force the talking subject to find alternative, detoured, possibly meandering ways to express the thought anyway, i.e. to find a way to travel the path between one word and another? Wouldn't such blocking equate with "repression"? [Freud: "Impressions reconstituted..." (Freud 1917; Jorion 1990: 74). Also: "If one explores..." (Freud 1895; Jorion ibid. 74)]. Wouldn't the outcome amount to neurosis as Freud characterised it: a self-inflicted taboo on particular thoughts? Freud wrote about this: "Material associations (assonance, double entendre, coincidence in time without any more meaningful connection) dominate due to the pressure that censorship exerts, and not because representations would be absent. In figuration, such material associations show up instead of the meaningful ones when censorship prevents these from being used. It is just like when some flooding has rendered the best mountain roads unusable: it remains possible to circulate, but only through the precipitous and difficult trails that only hunters would use in ordinary circumstances" (Freud 1900; Jorion 1990: 75).
The image of a "smooth functioning" moving centre, rooted in dynamic feedback about relative degrees of success, is vulnerable to frustration when success feedback is not within tolerable limits.
The hierarchical structure of hereditary networks, whose precise decomposition consists in the invariant possibility of unambiguous outputs from a given input, is vulnerable to dysfunction in quite a different way. It is not the judgement of success or frustration that derails normal functioning but ambiguity, indeterminacy, the inability to act, think or feel in a consistent way: hence the vulnerability is to paralysis.
In the lattice model of hereditary networks, input signals from "above" into a set of activated nodes result in their convergence to a unique node "below" that is the output signal. Global uniqueness is guaranteed by the local property of a unique intersection (downward meet, upward join) for every pair of elements.
The disruption of the lattice model, then, occurs when the neural pathways are networked so as not to result in a unique meet or join. Disturbance — potential paralysis because of ambiguous outputs — results because the network is not self-organised as a lattice. The following diagram is an example:
A   B
|\ /|
| X |
|/ \|
C   D
Here there are two meets of A and B: C and D. There are, dually, two joins of C and D. Lack of uniqueness "downward" and lack of uniqueness "upward" are co-terminous.
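The defect can be checked mechanically: for each pair of upper nodes, count their common lower neighbours; more than one common neighbour signals the ambiguity pictured above. A minimal sketch (the encoding of the diagram is ours):

```python
# Checking the "unique meet" property on the diagram above: A and B are
# both connected downward to C and D, so their meet is not unique.
from itertools import combinations

downward = {"A": {"C", "D"}, "B": {"C", "D"}}   # edges from upper to lower nodes

def non_unique_meets(downward):
    """Pairs of upper nodes with more than one common lower neighbour."""
    return {(x, y): downward[x] & downward[y]
            for x, y in combinations(downward, 2)
            if len(downward[x] & downward[y]) > 1}

print(non_unique_meets(downward))   # -> {('A', 'B'): {'C', 'D'}}
```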
As a purely cognitive problem, lack of uniqueness does not present a problem, since choice or random selection is possible between alternatives. A problem arises, however, when the alternatives, say C and D, are surrounded by "outside" interference that we would identify with approach-avoidance. Say that C is multiply connected, through endogenous (G) operators, to one block of self-reinforcing ("approach") signals, and to another block of self-dampening ("avoidance") signals. This may effectively "paralyse" a decision about C. If D offers a viable alternative, a new "unique" pathway may be etched to resolve the problem of paralysis. But if all the alternatives such as D are connected to approach-avoidance ambivalences, the stimulus-response circuits are thrown not into a consistent thought/action loop but into a fearful or anxious state, and no new pathway is etched to resolve the original ambiguity. Every time the triggering stimuli are encountered, paralysis may result. Again, the existence of the problem is dependent on an interplay between F and G operators, or the "emotional" conditioning of cognition.
It is precisely the "evaluative" component of the interplay between F and G operators that gives a clue as to how hereditary lattices are structured. The intersections chosen are at every level reinforced by positive (approach) or negative (avoidance) loops. This leads to the following observations:
1. To the extent that "choice" is absent — and there is a just sufficient match between G connectors into hereditary F structures to activate unique output responses — the hereditary lattice acts as a perfectly rational bureaucracy ("sorting" input conditions into output responses as if an unambiguous set of rules operated): there is routine, a lack of innovation, and a lack of explicit affect to recondition choices.
2. That choice is present implies an "over-connection" of G operators entwining into F hereditary lattices, and the possibility of ambiguity or non-unique outputs, reconditioning, innovation, and, of necessity, affective states of cognition. Innovation consists precisely of taking new combinations and their associated risks: that is to say, not the combinations with known outcomes but reconditioned possibilities.
Once there is choice and risk, there is the possibility of regret. Regret appears as a G-connected signpost in the hereditary F lattice that generates lattice violations for alternative intersections: postings that say, in effect, "not here: try another." If these are not circular then they lead to a new lattice, freezing out previous alternatives. If they are circular, they lead again to ambivalence, the "problem" of rational choice, the possibility of paralysis.
21. Psychosis amounts to defects in the network's structure (Lacanian "foreclosure")
All pathologies mentioned above still assume that the entirety of the Network can be travelled through. But what would happen if a tabooed term were playing a key communication role within the Network? What if it were a necessary point of passage or, resorting to Freud's metaphor, what if the flooding were such that it became simply impossible to move anymore from a particular valley to neighbouring ones? The pathologies mentioned in section 20 all assume that the connectedness of the Network still holds. Some key edges might be such that, were they removed, a graph connected so far would break down into a number of separate ones, two at least. These are called "bridges": "an edge is a 'bridge' (or isthmus) if its removal increases the number of components of the graph" (Bollobas 1979: 5). Should the neurotic taboo affect one of those, the talking subject is forced to generate speech acts from within one of the — at least — two disconnected Networks, being unable to access whatever material is stored in the others. This would have three consequences: (i) speech acts may become incoherent, for lack of their indispensable elements; (ii) explanation would have to remain restricted to what is available locally, being unable to summon what might be the more plausible factors that a normal subject would put forth; as relaxation of the gradient descent cannot be fully achieved, compensation may induce a local "explanatory overkill" as is observed in paranoia; (iii) the disconnected parts of the network having ceased to communicate, they generate speech independently: the emergence of speech acts from another part of the network is perceived as coming from an external source by every other part.
The French psychoanalyst Lacan's theory of psychosis is akin to what I have just described. What he calls "foreclosure" in the etiology of madness is the inaccessibility of a "signifiant-maître": a "master-signifier", a "lord of all words". Such are for instance the father's name, or the word "mother". A "master-signifier" is a term of early inscription, likely therefore to be part of a high number of word-pairs. Lacan said in one of his seminars in 1955-56: "If there are things that the patient has no wish to deal with, even in terms of repression, this entails another type of mechanism. [...] What do I mean when I say Verwerfung ("foreclosure")? It refers to the exclusion of a primal signifier, which will forever be missing in this location. This is the fundamental mechanism I am supposing as the source of paranoia" (Lacan [1955-56] 1981: 170-171). In terms of the Network we see a "master-signifier" as precisely what graph theory calls a bridge. Lacan adds: "... psychosis amounts to a hole, something missing at the level of signifiers." Should a word like "mother" become unavailable, the Network loses its connectedness: it breaks down into two or more smaller Networks, inaccessible to each other.
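In graph-theoretical terms this can be checked mechanically: a bridge is an edge whose removal increases the number of connected components, exactly as in Bollobas's definition quoted above. A minimal sketch using networkx on a toy word graph of our own invention:

```python
# Finding "bridges" in a toy Network and observing the loss of
# connectedness when one is removed (the vocabulary is illustrative).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("milk", "white"), ("white", "mist"), ("mist", "milk"),      # one cluster
    ("mother", "mist"),                                           # the bridge
    ("mother", "home"), ("home", "childhood"), ("childhood", "mother"),
])

print(list(nx.bridges(G)))                 # e.g. [('mist', 'mother')]
G.remove_edge("mist", "mother")            # "foreclosure" of the bridging edge
print(nx.number_connected_components(G))   # -> 2: the Network has broken in two
```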
IV. Implications
22. Speech generation is automatic and only involves the four sources mentioned above
Speech performance apprehended as a reflexive mechanism does not live in a vacuum: it is always responsive in some manner to something which has preceded it. In other words, speech performance is properly speaking "dialectic", always interactive in the way of a dialogue, even, and this is the beauty of it, when it is a monologue, even when it is the inner "dialogue" of silent thought. The overall process is that speech acts are received by a recipient and trigger at their destination reflexes of the same nature as those which had led to the triggering act itself. Indeed, as a consequence of the process being automatic, i.e. of a reflexive nature and unconscious, and of there being a time lag between utterance and hearing, the source and the recipient may very well be the same individual: which is why one hears oneself speak. Hence the following conclusion to what we have been expounding here: in discourse, every one of its component speech acts is the product of a reflex to a prior one — this having been produced either by a different locutor or by the same.
Such a Hebbian perspective achieves per se a synthesis between rational and emotional dynamics habitually seen as divergent principles of discourse generation. A Hebbian approach to word dynamics firstly accounts in an associationist manner for clause generation, where a speaking subject's prior history provides the template for later connections between concepts. Secondly, through the mechanism of weighted activation, a distinctive light is shed on signification. Indeed, in contrast with the classical view where the overall meaning of a clause results from serial processing of the words composing it, within the Hebbian framework the meaning of a sentence is a global three-dimensional packet of intermixing atomic meanings as provided by words (the concept is reminiscent of the scholastic notion of the complexus significabilis where words combined evoke a "state of affairs" — see Jorion 1997b). Such an approach is close to what the semantics of languages such as Chinese force on the linguist and underlines how often our current reflections derive from familiarity with a single Indo-European language.
23. Speech generation is deterministic
The first thing one says, then, is a pure reflex to whatever is happening. One responds with something which has worked before in similar circumstances. And in what way has it worked? Because it was an apt anticipation of what was to happen next. In other words, the affect dynamics is both effective and adaptive. By "adaptive" we mean that it fulfils a survival function and induces in the subject a feeling of well-being associated with living in optimal conditions, and also that survival is more effectively attained with its help than without it. By "anticipating" I mean that, as with unconditioned as well as conditioned reflexes — the prototypical case of the Pavlovian process: the dog salivating when it hears a bell ringing — it provides the subject with a beneficial readiness for things to come: reducing the overall effort, or increasing the chances of success, or forestalling some negative and therefore unwanted consequences, or, as in the case of speech, allowing things to be done on one's behalf without having to do them oneself, or allowing one to push one's advantage or, alternatively, to gear up for prompt and orderly retreat. In such a perspective of automaticity there is no hidden conductor masterminding intentions of what is being said. This is all self-generated from outside the realm of purposive consciousness. Some of the things I say are triggered by what you said, some by the things I said myself — as soon as I heard myself saying them.
24. There is no room for any additional "supra-factor" in speech act generation than the four mentioned above
We've been drawing the concept of a machinery where a tenuous balance of inner satisfaction is very easily upset by needs and desires whose source might be internal ("bodily") but also speech-induced, whether by an interlocutor or even by oneself: our speaking breeds our own discourse. Thus the source of what a person says is unconscious: the triggering of what is being said has taken place elsewhere than in consciousness.
Consciousness is hardly more than the time needed by our emotional dynamics to update itself in line with what we hear ourselves saying (either through the "outer" or through the "inner" ear). If this perspective is accepted then consciousness is deprived of any decision-making role in the generation of rational discourse. Consciousness is real, as opposed to illusory, but its role is subsidiary and will not need to be reproduced in a machine meant by us to mimic intelligent sentence production.
25. One such superfluous "supra-factor" is "intentionality" triggered by consciousness or otherwise
The vacuity of the notion, in the functionalist tenet (directly borrowed from popular psychology), that what one says one first "intends" to say, is even more evident with silent speech. In silent speech, would one first "intend" to say what one says silently? Or maybe "silent speech" is the intention itself: one hears oneself "hear in one's head" one's intention to speak. And when one speaks aloud, the intention, i.e. a "silent speech" version of one's speech performance, would precede it, necessarily by a split second. Libet's empirical research has shown the reverse to be true: the intention arises half a second after the act has been initiated (Libet 1981; 1992).
The progress we've been making in the past twenty years in our understanding of cognitive processes is nothing short of extraordinary. Simultaneously, a debate has raged in different, or sometimes the same, quarters on the nature and operation of consciousness. Few have reflected on how little the second debate has contributed to the progress of the first endeavour. The circumstances are reminiscent of an apocryphal story told about the astronomer Laplace who, having expounded his model of the universe to Napoleon, his Emperor, was told by the latter: "But, Monsieur, I haven't heard of any place for the Divine Architect in your system". To which Laplace supposedly retorted: "Your Highness, I had no need for such a hypothesis".
Indeed I see very little in the columns of Behavioral and Brain Sciences in the likeness of the so-called "functionalist model". With the single exception of neurones — of relatively recent discovery — such a functionalist scheme does not depart noticeably from the popular psychology stamped in common speech since — at the very least — the ancient Greeks. Theories relative to the functioning of the human mind — starting with Plato's and Aristotle's — have consistently shown suspicion at the obviousness with which language inclines us to evoke states-of-affairs. Brentano's system represents therefore a return to the spontaneous apprehension of mental functions as the tongue proposes, and displays accordingly the revealing signs of epistemological naiveté. The causal role which functionalism assigns to consciousness is crucial, as it is consciousness, and it only, which holds within the scheme the power of transforming an intention into an action. To deny consciousness such a function equates to depriving voluntary acts of an origin, and accordingly to slaying the conceptual construct of functionalism altogether. Hence, within this perspective, the supposedly "obvious absurdity" of any approach which presents consciousness as deprived of all decisional power. This is, at least, the case within the current paradigm of the mainstream cognitive sciences: should consciousness be conceived as powerless in the generation of speech performance, the functionalist scheme simply evaporates. Consciously driven thought supposes an intention of speaking, the form of which can only be a silent repetition of speech before it is spoken. The notion of a central subject intending to utter the sentences generated is therefore only distracting, as it shifts the attention from the actual performance of speech acts to the unknown machinery of the "intentionality" supposedly lurking in the background of speech performance.
It may seem that our disciplines will soon come up with a complete picture of the workings of the mind without such a functionalist scheme ever having been summoned. Does this mean that consciousness may not play any role in a fully developed model of the human psyche? The very fact that it has not been required so far means in any case that consciousness might not play any crucial role at all in the eventual picture. A couple of years ago in a French journal (Jorion 1999) I proposed something of the kind: a model where only a limited and essentially passive role is left to consciousness, such an approach paradoxically solving more theoretical problems than the more common reverse view of consciousness in the driving seat.
References (incomplete):
*Aristotle, 1949 Categories, in Aristotle I, trad. H.P. Cooke & H. Tredennick, Loeb Classical Library, Cambridge (Mass.): Harvard University Press
*Aristotle, 1960 Topica, in Aristotle II, trad. H. Tredennick & E.S. Forster, Loeb Classical Library, London: Heinemann, Cambridge (Mass.): Harvard University Press
Aristotle, On the Soul
Aristotle, 1936 De la mémoire et de la remémoration, in Aristotle VIII (Parva Naturalia), trad. W.S. Hett, Loeb Classical Library, Cambridge (Mass.): Harvard University Press
- Barbut and Monjardet (1970)
- *Benveniste, E., 1966 Problèmes de linguistique générale, Paris : Gallimard
- *Birkhoff (1967)
- *Blanché, R. 1970 La logique et son histoire d'Aristote à Russell, Paris : Armand Colin
- Bollobas 1979
- Churchland, Patricia Smith, "On the alleged backwards referral of experiences and its relevance to the mind-body problem", Philosophy of Science, 48, 1981: 165-181
- *Bulmer 1979
- Clahsen (2000)
- *Damasio
- *D'Andrade 197.
- *Duquenne (1987)
- *Duquenne (1992)
- *Edelman 1987
- Evans-Pritchard's Nuer Religion
- Finkel, Reeke & Edelman 1989
- *Freeman and White 1993
- Freud 1895 Sketch
- Freud 1894 Les psychonévroses de défense
- Freud 1900 The interpretation of dreams
- Freud 1917
- *Gilson, E. 1922 La philosophie au moyen âge, Paris : Payot
- *Gilson, E. 1927 Le Thomisme, Paris : Vrin
- Graham, A. C. 1989. Disputers of the Tao: Philosophical Arguments in Ancient China. La Salle, Ill: Open Court.
- Graham, A. C. 1990 Studies in Chinese Philosophy and Philosophical Literature
- Granet, M. 1934 La pensée chinoise, Paris : Albin Michel
- *Grice 1975
- *Grice 1978
- *Griswold 1990:
- *Grodzinsky, Yosef 2000 « The neurology of syntax: Language use without Broca's area », Behavioral and Brain Sciences
- *Hamelin, Octave, 1985 [1905] Le système d'Aristote, Paris : Vrin
- *Hansen, Chad, 1983 Language and Logic in Ancient China, Ann Arbor: The University of Michigan Press
- *Imbert, Claude, 1999 Pour une histoire de la logique. Un héritage platonicien, Paris: P.U.F.
- Jorion 1988 Anella
- *Jorion, Paul, 1989 « Intelligence artificielle et mentalité primitive. Actualité de quelques concepts lévy-bruhliens », Revue Philosophique, 4 : 515-541
- *Jorion, Paul 1990a Principes des systèmes intelligents, Paris : Masson
- *Jorion, Paul, « An alternative neural network representation for conceptual knowledge », communication présentée à la British Telecom CONNEX Conference, Martlesham Heath, January 1990b, 23 pp. ; http://cogprints.soton.ac.uk/abs/comp/199806036
- Jorion, Paul, « L'intelligence artificielle : au confluent des neurosciences et de l'informatique », Lekton, vol. IV, 2, 1994a : 85-114
- Jorion, Paul, 1996 « La linguistique d'Aristote », in V. Rialle & D. Fisette (eds.), Penser l'esprit : Des sciences de la cognition à une philosophie cognitive, Grenoble : Presses Universitaires de Grenoble : 261-287
- Jorion, Paul, « Ce qui fait encore cruellement défaut à l'Intelligence artificielle », Informations In Cognito, No 7, 1997a : 1-4
- Jorion, Paul, « Jean Pouillon et le mystère de la chambre chinoise », L'Homme, 143, 1997b : 91-99
- Jorion, Paul, « Le miracle grec », in Papiers du Collège International de Philosophie, N° 51 Reconstitutions, 2000a : 17-38
- Jorion, Paul & G. Delbos, 1985 « Truth is shared bad faith. Common ground and presupposition in the light of a dialectical model of conversational pragmatics », in J. Allwood & E. Hjelmquist (eds.), Foregrounding Background, Lund : Doxa, 87-97.
- *Jorion & Lally 1984
- *Jung, Carl, « Psychoanalysis and association experiments », 1906, in Carl Jung, The Collected Works, Volume Two : Experimental Researches, London : Routledge & Kegan Paul, 1973 : 288-317
- Lacan [1955-56] 1981
- Libet, Benjamin, « The experimental evidence for subjective referral of a sensory experience backwards in time : Reply to P. S. Churchland », Philosophy of Science, 48, 1981 : 182-197
- Libet, Benjamin, « The Neural Time-Factor in Perception, Volition and Free Will », Revue de Métaphysique et de Morale, 2, 1992 : 255-272
- *Lukasiewicz, Jan 1998 [1951] Aristotle's Syllogistic From the Standpoint of Modern Formal Logic (2d ed. enlarged), Oxford : Oxford University Press
- *Mach, E. 1960 [1863] The Science of Mechanics : A Critical and Historical Account of its Development, La Salle (Ill.) : The Open Court
- MacKinnon, Edward E., 1982 Scientific Explanation and Atomic Physics, Chicago : Chicago University Press
- Merleau-Ponty, Maurice, « Sur la phénoménologie du langage » (1951), in Signes, Paris : Gallimard, 1960 : 105-122
- Milner 1989
- *Moody, E. A. 1953 Truth and Consequence in Mediaeval Logic, Amsterdam : North-Holland
- *Plato, Sophist
- Pribram & Gill 1976
- *Ross, W. D., 1923 Aristotle, London : Methuen
- *Rubin 1986 Autobiographical Memory
- *Ryle, G., 1954 Dilemmas, The Tarner Lectures 1953, Cambridge : Cambridge University Press
- *Saks ...
- Searle, John R., « Consciousness and the Philosophers », The New York Review of Books, Vol. XLIV, 4, March 6, 1997 : 43-50
- Sextus Empiricus, 1936 Sextus Empiricus, trad. R.G. Bury, London : Heinemann
- Smolensky 1989
- Tesnière, L., 1982 Éléments de syntaxe structurale, Paris : Klincksieck
- *Thom, R. 1988 Esquisse d'une Sémiophysique. Physique aristotélicienne et Théorie des Catastrophes, Paris : InterEditions
- *Vuillemin, J. 1967 De la logique à la théologie, Cinq études sur Aristote, Paris : Flammarion
- *White and Jorion 1992
- *White and Jorion 1996
- *Wille (1982)
- Wittgenstein 1966
- *Wittgenstein [1953] 1963