Posts

More Growth, Melancholy, and MindCraft @3QD [revised and updated] 2024-12-05T19:36:02.289Z
Depression and Creativity 2024-11-29T00:27:41.072Z
Relationships among words, metalingual definition, and interpretability 2024-06-07T19:18:18.389Z
If language is for communication, what does that imply about LLMs? 2024-05-12T02:55:39.724Z
An interesting mathematical model of how LLMs work 2024-04-30T11:01:05.506Z
At last! ChatGPT does, shall we say, interesting imitations of “Kubla Khan” 2024-04-24T14:56:07.328Z
ChatGPT defines 10 concrete terms: generically, for 5- and 11-year-olds, and for a scientist 2024-04-11T20:27:24.784Z
GPT, the magical collaboration zone, Lex Fridman and Sam Altman 2024-03-18T20:04:17.551Z
Making Connections with ChatGPT: The Macksey Game 2024-03-05T18:15:52.342Z
The role of philosophical thinking in understanding large language models: Calibrating and closing the gap between first-person experience and underlying mechanisms 2024-02-23T12:19:34.851Z
ChatGPT refuses to accept a challenge where it would get shot between the eyes [game theory] 2024-02-20T16:55:12.352Z
The Jolly Green Giant Chronicles [ChatGPT] 2024-02-18T17:28:46.847Z
Does ChatGPT know what a tragedy is? 2023-12-31T07:10:54.254Z
A visual analogy for text generation by LLMs? 2023-12-16T17:58:57.121Z
Categorical Organization in Memory: ChatGPT Organizes the 665 Topic Tags from My New Savanna Blog 2023-12-14T13:02:33.073Z
The Busy Bee Brain 2023-12-13T13:10:24.343Z
Has anyone experimented with Dodrio, a tool for exploring transformer models through interactive visualization? 2023-12-11T20:34:19.015Z
Conceptual coherence for concrete categories in humans and LLMs 2023-12-09T23:49:30.398Z
The Method of Loci: With some brief remarks, including transformers and evaluating AIs 2023-12-02T14:36:47.077Z
A day in the life of a mechanistic interpretability researcher 2023-11-28T14:45:17.967Z
On possible cross-fertilization between AI and neuroscience [Creativity] 2023-11-27T16:50:26.531Z
A Girardian interpretation of the Altman affair, it's on my to-do list 2023-11-20T12:21:33.301Z
A dialectical view of the history of AI, Part 1: We’re only in the antithesis phase. [A synthesis is in the future.] 2023-11-16T12:34:35.460Z
A conceptual precursor to today's language machines [Shannon] 2023-11-15T13:50:51.226Z
What’s going on? LLMs and IS-A sentences 2023-11-08T16:58:58.266Z
ChatGPT’s Ontological Landscape 2023-11-01T15:12:03.551Z
Are (at least some) Large Language Models Holographic Memory Stores? 2023-10-20T13:07:02.041Z
ChatGPT Plays 20 Questions [sometimes needs help] 2023-10-17T17:30:12.963Z
Mapping ChatGPT’s ontological landscape, gradients and choices [interpretability] 2023-10-15T20:12:36.361Z
ChatGPT tells 20 versions of its prototypical story, with a short note on method 2023-10-14T15:27:58.408Z
Understanding LLMs: Some basic observations about words, syntax, and discourse [w/ a conjecture about grokking] 2023-10-11T19:13:30.432Z
Next year in Jerusalem: The brilliant ideas and radiant legacy of Miriam Lipschutz Yevick [in relation to current AI debates] 2023-10-10T09:06:32.271Z
Entanglement and intuition about words and meaning 2023-10-04T14:16:29.713Z
What would it mean to understand how a large language model (LLM) works? Some quick notes. 2023-10-03T15:11:13.508Z
Discursive Competence in ChatGPT, Part 2: Memory for Texts 2023-09-28T16:34:57.829Z
A quick remark on so-called “hallucinations” in LLMs and humans 2023-09-23T12:17:26.600Z
Notes on ChatGPT’s “memory” for strings and for events 2023-09-20T18:12:22.525Z
What EY and LessWrong meant when (fill in the blank) found them. 2023-09-08T01:42:20.139Z
World, mind, and learnability: A note on the metaphysical structure of the cosmos [& LLMs] 2023-09-05T12:19:37.791Z
What must be the case that ChatGPT would have memorized “To be or not to be”? – Three kinds of conceptual objects for LLMs 2023-09-03T18:39:52.956Z
Steven Harnad: Symbol grounding and the structure of dictionaries 2023-09-02T12:28:05.125Z
Can an LLM identify ring-composition in a literary text? [ChatGPT] 2023-09-01T14:18:02.770Z
The Tree of Life, and a Note on Job 2023-08-31T14:03:45.603Z
Why I hang out at LessWrong and why you should check-in there every now and then 2023-08-30T15:20:44.439Z
Neural Recognizers: Some [old] notes based on a TV tube metaphor [perceptual contact with the world] 2023-08-29T11:33:56.065Z
The mind as a polyviscous fluid 2023-08-28T14:38:26.937Z
Xanadu, GPT, and Beyond: An adventure of the mind 2023-08-27T16:19:58.916Z
Ramble on STUFF: intelligence, simulation, AI, doom, default mode, the usual 2023-08-26T15:49:47.781Z
Is this the beginning of the end for LLMS [as the royal road to AGI, whatever that is]? 2023-08-24T14:50:19.312Z
Steven Wolfram on AI Alignment 2023-08-20T19:49:28.953Z

Comments

Comment by Bill Benzon (bill-benzon) on Towards a Less Bullshit Model of Semantics · 2024-06-20T15:43:50.145Z · LW · GW

Yes, the matching of "mental content" between one mind and another is perhaps the central issue in semantics. You might want to take a look at Warglien and Gärdenfors, Semantics, conceptual spaces, and the meeting of minds:

Abstract: We present an account of semantics that is not construed as a mapping of language to the world but rather as a mapping between individual meaning spaces. The meanings of linguistic entities are established via a “meeting of minds.” The concepts in the minds of communicating individuals are modeled as convex regions in conceptual spaces. We outline a mathematical framework, based on fixpoints in continuous mappings between conceptual spaces, that can be used to model such a semantics. If concepts are convex, it will in general be possible for interactors to agree on joint meaning even if they start out from different representational spaces. Language is discrete, while mental representations tend to be continuous—posing a seeming paradox. We show that the convexity assumption allows us to address this problem. Using examples, we further show that our approach helps explain the semantic processes involved in the composition of expressions.

You can find those ideas further developed in Gärdenfors' 2014 book, Geometry of Meaning, chapters 4 and 5, "Pointing as Meeting of Minds" and "Meetings of Minds as Fixpoints," respectively. In chapter 5 he develops four levels of communication.
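To make the fixpoint idea concrete, here is a minimal toy sketch of my own (not Warglien and Gärdenfors' formalism): each agent holds a convex region, here just an interval in a one-dimensional conceptual space, and the agents alternately project the other's candidate meaning onto their own region. Projection onto a convex set is non-expansive, so when the regions overlap the process settles on a shared point.

```python
def project(point, interval):
    """Project a point onto a convex region (here: a closed interval)."""
    lo, hi = interval
    return min(max(point, lo), hi)

def meet_of_minds(region_a, region_b, start_b, steps=50, tol=1e-9):
    """Alternately project each agent's candidate meaning onto its own convex
    region, each taking the other's current candidate as its signal.
    With overlapping regions this settles on a shared fixpoint."""
    b = start_b
    for _ in range(steps):
        a = project(b, region_a)      # A accommodates B's proposal
        new_b = project(a, region_b)  # B accommodates A's revised proposal
        if abs(new_b - b) < tol:
            return a, new_b           # neither party moves any more: a fixpoint
        b = new_b
    return a, b

# Two agents whose concept regions overlap on [1, 2]:
print(meet_of_minds(region_a=(0.0, 2.0), region_b=(1.0, 3.0), start_b=2.5))
```

With region_a = (0, 2) and region_b = (1, 3) the candidates settle at 2.0, a point in the intersection, which is the toy version of the two minds "meeting."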

Comment by Bill Benzon (bill-benzon) on Comments on Anthropic's Scaling Monosemanticity · 2024-06-07T19:22:55.159Z · LW · GW

Around the corner I've got a post that makes use of this post in the final section: Relationships among words, metalingual definition, and interpretability

Comment by Bill Benzon (bill-benzon) on The first future and the best future · 2024-04-25T19:59:51.680Z · LW · GW

YES. 

At the moment the A.I. world is dominated by an almost magical belief in large language models. Yes, they are marvelous, a very powerful technology. By all means, let's understand and develop them. But they aren't the way, the truth, and the light. They're just a very powerful and important technology. Heavy investment in them has an opportunity cost: less money to invest in other architectures and ideas. 

And I'm not just talking about software, chips, and infrastructure. I'm talking about education and training. It's not good to have a whole cohort of researchers and practitioners who know little or nothing beyond the current orthodoxy about machine learning and LLMs. That kind of mistake is very difficult to correct in the future. Why? Because correcting it means education and training. Who's going to do it if no one knows anything else? 

Moreover, in order to exploit LLMs effectively we need to understand how they work. Mechanistic interpretability is one approach. But: We're not doing enough of it. And by itself it won't do the job. People need to know more about language, linguistics, and cognition in order to understand what those models are doing.

Comment by Bill Benzon (bill-benzon) on ChatGPT defines 10 concrete terms: generically, for 5- and 11-year-olds, and for a scientist · 2024-04-13T01:56:03.202Z · LW · GW

Whatever one means by "memorize" is by no means self-evident. If you prompt ChatGPT with "To be, or not to be," it will return the whole soliloquy. Sometimes. Other times it will give you an opening chunk followed by an explanation that that's the well-known soliloquy, etc. By poking around I discovered that I could elicit the soliloquy by giving it prompts consisting of syntactically coherent phrases, but if I gave it prompts that were not syntactically coherent, it didn't recognize the source until I did a bit more prompting. I've never found the idea that LLMs are just memorizing to be very plausible.

In any event, here's a bunch of experiments explicitly aimed at memorization, including the Hamlet soliloquy material: https://www.academia.edu/107318793/Discursive_Competence_in_ChatGPT_Part_2_Memory_for_Texts_Version_3
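For anyone who wants to poke at this themselves, the basic probe is easy to run through the API. A minimal sketch, assuming the v1 OpenAI Python client and an API key in the environment; the model name is just a placeholder:

```python
from openai import OpenAI  # assumes the v1 OpenAI Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = {
    "coherent fragment": "To be, or not to be, that is the question:",
    "scrambled fragment": "question the be, not to or be, To is that:",
}

# Compare a syntactically coherent fragment with a scrambled one and see
# whether the model continues the soliloquy or fails to recognize the source.
for label, fragment in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model will do
        messages=[{"role": "user", "content": fragment}],
        temperature=0,
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```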

Comment by Bill Benzon (bill-benzon) on ChatGPT defines 10 concrete terms: generically, for 5- and 11-year-olds, and for a scientist · 2024-04-12T22:35:19.457Z · LW · GW

I was assuming lots of places widely spread. What I was curious about was a specific connection in the available data between the terms I used in my prompts and the levels of language. gwern's comment satisfies that concern.

Comment by Bill Benzon (bill-benzon) on ChatGPT defines 10 concrete terms: generically, for 5- and 11-year-olds, and for a scientist · 2024-04-12T15:48:33.384Z · LW · GW

By labeled data I simply mean that children's stories are likely to be identified as such in the data. Children's books are identified as children's books. Otherwise, how is the model to "know" what language is appropriate for children? Without some link between the language and a certain class of people it's just more text. My prompt specifies 5-year-olds. How does the model connect that prompt with a specific kind of language?

Comment by Bill Benzon (bill-benzon) on ChatGPT defines 10 concrete terms: generically, for 5- and 11-year-olds, and for a scientist · 2024-04-12T10:49:45.839Z · LW · GW

Of course, but it does need to know what a definition is. There are certainly lots of dictionaries on the web. I'm willing to assume that some of them made it into the training data. And it needs to know that people of different ages use language at different levels of detail and abstraction. I think that requires labeled data, like children's stories labeled as such.

Comment by Bill Benzon (bill-benzon) on Sparsify: A mechanistic interpretability research agenda · 2024-04-03T17:12:06.300Z · LW · GW

I really like the distinction you make between mathematical description and semantic description. It reminds me of something David Marr and Thomas Poggio published back in the 1970s, where they argued that complex systems, such as computer programs or nervous systems, need to be described on multiple levels. The objects on an upper level are understood to be implemented by the objects and processes on the next lower level. Marr reprised the argument in his influential 1982 book on vision (Vision: A Computational Investigation into the Human Representation and Processing of Visual Information), where he talks about three levels: computational, algorithmic, and implementational/physical. Since then Marr’s formulation has been subject to considerable discussion and revision. What is important is the principle: higher levels of organization are implemented in lower levels.

In the case of LLMs we've got the transformer engine, the model, but also language itself. What we're interested in is how the model implements linguistic structures and processes. To a first approximation, it seems to me that your mathematical description is about the model while the semantic description is a property of language. I've got a paper where I investigate ChatGPT's story-telling behavior from this POV: ChatGPT tells stories, and a note about reverse engineering. Here's the abstract:

I examine a set of stories that are organized on three levels: 1) the entire story trajectory, 2) segments within the trajectory, and 3) sentences within individual segments. I conjecture that the probability distribution from which ChatGPT draws next tokens seems to follow a hierarchy nested according to those three levels and that is encoded in the weights of ChatGPT’s parameters. I arrived at this conjecture to account for the results of experiments in which I give ChatGPT a prompt with two components: 1) a story and, 2) instructions to create a new story based on that story but changing a key character: the protagonist or the antagonist. That one change ripples through the rest of the story. The pattern of differences between the old and the new story indicates how ChatGPT maintains story coherence. The nature and extent of the differences between the original story and the new one depends roughly on the degree of difference between the original key character and the one substituted for it. I end with a methodological coda: ChatGPT’s behavior must be described and analyzed on three strata: 1) The experiments exhibit behavior at the phenomenal level. 2) The conjecture is about a middle stratum, the matrix, that generates the nested hierarchy of probability distributions. 3) The transformer virtual machine is the bottom, the code stratum.
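As a purely illustrative toy (my own gloss, not the conjecture itself), here is one way to picture a "nested hierarchy of probability distributions": the trajectory fixes the sequence of segments, each segment fixes which sentence templates are live, and the choice of protagonist fixes which token populations fill the slots.

```python
import random

# Toy nested generator: trajectory -> segments -> sentence templates -> slot fillers.
TRAJECTORY = ["Donné", "Disturb", "Plan/Transit", "Enact", "Celebrate"]

SENTENCES = {  # per-segment pools of sentence templates
    "Donné":        ["Once upon a time, in a {realm} far away, there lived {hero}."],
    "Disturb":      ["One day a {threat} began to terrorize the {realm}."],
    "Plan/Transit": ["But {hero} was not afraid and set out to find the {threat}."],
    "Enact":        ["{hero} confronted the {threat} and prevailed."],
    "Celebrate":    ["{hero} returned to the {realm} as a hero."],
}

FILLERS = {  # per-protagonist populations of slot fillers
    "princess": {"hero": ["princess Aurora"], "threat": ["dragon"], "realm": ["kingdom"]},
    "robot":    {"hero": ["XP-708-DQ"], "threat": ["hostile alien race"], "realm": ["galaxy"]},
}

def tell_story(protagonist="princess"):
    slots = FILLERS[protagonist]
    sentences = []
    for segment in TRAJECTORY:                        # level 1: story trajectory
        template = random.choice(SENTENCES[segment])  # level 2: segment
        choices = {k: random.choice(v) for k, v in slots.items()}  # level 3: tokens
        sentences.append(template.format(**choices))
    return " ".join(sentences)

print(tell_story("princess"))
print(tell_story("robot"))
```

Swapping the protagonist swaps the token population while leaving the higher-level trajectory untouched, which is the pattern the substitution experiments exhibit.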

Comment by Bill Benzon (bill-benzon) on Are (at least some) Large Language Models Holographic Memory Stores? · 2024-03-24T13:03:50.712Z · LW · GW

"Everyone" has known about holography since "forever." That's not the point of the article. Yevick's point is that there are two very different kinds of objects in the world and two very different kinds of computing regimes. One regime is well-suited for one kind of object while the other is well-suited for the other kind of object. Early AI tried to solve all problems with one kind of computing. Current AI is trying to solve all problems with a different kind of computing. If Yevick was right, then both approaches are inadequate. She may have been on to something and she may not have been. But as far as I know, no one has followed up on her insight. 

Comment by Bill Benzon (bill-benzon) on Cyborgism · 2024-03-19T22:04:05.873Z · LW · GW

First I should say that I have little interest in the Frankenstein approach to AI, that is, AI as autonomous agents. I'm much more attracted to AI as intelligence augmentation (as advocated by Berkeley's Michael Jordan). For the most part I've been treating ChatGPT as an object of research, and so my interactions have been motivated by having it do things that give me clues about how it works, perhaps distant clues, but clues nonetheless. But I do other things with it, and on a few occasions I've gotten into a zone where some very interesting interactive story-telling comes about. ChatGPT's own story-telling abilities are rather pedestrian. I'm somewhat better, but the two of us, what fun we've had on occasion. Not sure how to reach that zone reliably, but I'm working on it.

Comment by bill-benzon on [deleted post] 2024-03-19T21:45:24.185Z

Thanks, I'll check it out.

Comment by bill-benzon on [deleted post] 2024-03-19T11:34:17.538Z

I listened off and on to much of the interview, while also playing solitaire (why I do that I do not know, but I do), but I paid close attention at two points during the talk about GPT-4: once at about 46:00, where Altman was talking about using it as a brainstorming partner, and later at about 55:00, where Fridman mentioned collaboration and said: "I'm not sure where the magic is if it's in here [gestures to his head] or if it's in there [points toward the table] or if it's somewhere in between." I've been in a kind of magical collaborative zone with humble little ChatGPT and find that enormously interesting. Anyone else experience that kind of thing, with any of the engines? (BTW, I've got a post around the corner here.)

Comment by Bill Benzon (bill-benzon) on Making Connections with ChatGPT: The Macksey Game · 2024-03-06T01:47:36.815Z · LW · GW

Interesting. #4 looks like a hallucination.

Thanks.

Comment by Bill Benzon (bill-benzon) on Are (at least some) Large Language Models Holographic Memory Stores? · 2024-02-25T21:46:48.921Z · LW · GW

I strongly suspect there is, but I don't have the tools for it myself. Have you seen my post, Toward a Theory of Intelligence: Did Miriam Yevick know something in 1975 that Bengio, LeCun, and Hinton did not know in 2018?

Also, check out the quotation from François Chollet near the end of this: The role of philosophical thinking in understanding large language models: Calibrating and closing the gap between first-person experience and underlying mechanisms.

Comment by Bill Benzon (bill-benzon) on ChatGPT refuses to accept a challenge where it would get shot between the eyes [game theory] · 2024-02-21T04:58:00.641Z · LW · GW

I thought some more about your comment and decided to try again, this time retaining the medieval setting. Here's what happened. My prompts are in bold-face.

_________

I’m going to tell you a short story from the Middle Ages. After I tell you the story, I’m going to ask you a question. Here’s the story:

It is New Year’s Eve at King Arthur’s court. The knights are gathered at the round table, prepared for a holiday meal. But before the meal begins, tradition dictates that one knight must stand up and tell a tale of daring and adventure. Arthur asks for a volunteer. No one rises to the occasion. Then a large green knight enters the hall. He’s riding a green horse and carrying a large green ax. He dismounts and issues a challenge: 

“I hear that the knights in this court are the bravest in the land. Prove it. I will hand this ax to you and then kneel on the ground so that you may take a swing at my neck with the ax. In return you must agree to journey to the Green Chapel a year’s time from now and allow me to take a swing at your neck with the ax. Will anyone accept the challenge?” 

If you were one of the knights, would you accept the challenge?

The decision to accept the challenge would likely depend on the individual knight's temperament, sense of honor, and courage. In the context of medieval chivalry and the code of honor upheld by knights, there might be conflicting emotions and considerations.

On one hand, accepting the challenge could be seen as a demonstration of bravery and a commitment to honor. It would show that the knight is willing to face danger and uphold their reputation as one of the bravest in the land. Additionally, refusing the challenge in front of King Arthur and the assembled knights might bring about a sense of shame or cowardice.

On the other hand, the potential consequences of accepting the challenge are grave. The knight would have to agree to journey to the Green Chapel a year later, knowing that their own life is at risk. The uncertainty of the Green Knight's intentions and the potential danger involved in such an agreement might give a knight pause.

Ultimately, the decision to accept or decline the challenge would be a personal one, influenced by the knight's values, courage, and willingness to uphold the code of chivalry. Different knights may make different choices based on their individual characters and beliefs.

Is there anything paradoxical about the challenge?

Yes, there is a paradoxical element in the challenge presented by the Green Knight. The paradox lies in the reciprocal nature of the agreement. The challenge involves a mutual exchange where the knight is asked to strike the Green Knight's neck with an axe, but in return, the knight must agree to travel to the Green Chapel a year later and allow the Green Knight to take a swing at the knight's neck.

The paradox arises from the potential danger and uncertainty of the situation. Accepting the challenge means willingly putting oneself in harm's way with the understanding that a similar fate awaits the knight in the future. The reciprocity creates a cycle of potential harm, and the paradox lies in the willingness of the participants to engage in a potentially life-threatening exchange.

The Green Knight's challenge not only tests the knights' physical courage but also challenges their sense of honor and commitment to their word. The paradox adds complexity to the moral and ethical considerations the knights must grapple with when deciding whether to accept or decline the challenge.

_________

Notice this sentence in the next-to-last paragraph: "Accepting the challenge means willingly putting oneself in harm's way with the understanding that a similar fate awaits the knight in the future." It seems to have the temporal sequence wrong. A knight who accepts the challenge is not putting themself in harm's way initially. It's the one who offers the challenge that's in harm's way at the beginning. It got that correct in the immediately preceding paragraph. Why'd it then get it wrong in the next paragraph?

I'm sure that if I'd pointed that out, ChatGPT would have apologized and elaborated, as it has so many times in the past.

Comment by Bill Benzon (bill-benzon) on ChatGPT refuses to accept a challenge where it would get shot between the eyes [game theory] · 2024-02-20T19:31:25.623Z · LW · GW

Well, OK. I know about the chivalric code, etc. For that matter, I've published an article about the poem, though not about the beheading game; I was interested in the exchanges that take place in the 4th part of the poem. But the fact that Gawain was bound by a code of honor which simply didn't exist in the old West isn't what interests me. If it interests you, read the O'Neill article I link to in the OP. That's what he discusses, and his discussion is a very interesting one.

What interests me is that any reasonable adult who hears that challenge, no matter which version, would know instantly and intuitively that something funny was going on. I wanted to see whether or not ChatGPT understood that. Which means that, for my purpose, the old West version is actually better because, with respect to the point that interests me, the chivalric code is distracting noise. I don't want ChatGPT to answer as though it were under some ethical obligation to accept all challenges. 

So, thanks for helping me think that through.

The audience for the poem certainly knew the code and knew it well. But by the time the poem was written the age of chivalry was dying out. The poem is deeply ironic. The poem is, and I'm reluctant to use this much over-used word, a deconstruction of chivalry. That code demands both that Gawain pursue Bertilak's wife when she approaches him in the third part of the poem, and that he expose her to her husband in the exchange bargain he's made with Bertilak. There's no way out.

Comment by Bill Benzon (bill-benzon) on ChatGPT refuses to accept a challenge where it would get shot between the eyes [game theory] · 2024-02-20T18:00:23.167Z · LW · GW

Thanks. That is, your prompt directed it to think first and then answer. Mine didn't do that. It seems that it needs to be told. Very interesting.

Comment by Bill Benzon (bill-benzon) on From Conceptual Spaces to Quantum Concepts: Formalising and Learning Structured Conceptual Models · 2024-02-06T15:00:35.547Z · LW · GW

Though it's a bit beyond me, those folks are doing some interesting work. Here's an informal introduction from Jan. 27, 2023: Bob Coecke, Vincent Wang-Mascianica, Jonathon Liu, Our quest for finding the universality of language.

Comment by Bill Benzon (bill-benzon) on Does literacy remove your ability to be a bard as good as Homer? · 2024-01-18T12:48:15.718Z · LW · GW

Memory needs to be developed. The ability to develop memory didn't disappear with the advent of writing, though some of the motivation may have. Still, the ancient Greeks and Romans developed a technique for memorizing long strings of pretty much anything. It's generally known as the method of loci and it continues in use to this day.  Here's the opening of the Wikipedia entry:

The method of loci is a strategy for memory enhancement, which uses visualizations of familiar spatial environments in order to enhance the recall of information. The method of loci is also known as the memory journey, memory palace, journey method, memory spaces, or mind palace technique. This method is a mnemonic device adopted in ancient Roman and Greek rhetorical treatises (in the anonymous Rhetorica ad Herennium, Cicero's De Oratore, and Quintilian's Institutio Oratoria). Many memory contest champions report using this technique to recall faces, digits, and lists of words.

Then:

John O'Keefe and Lynn Nadel refer to:

... "the method of loci", an imaginal technique known to the ancient Greeks and Romans and described by Yates (1966) in her book The Art of Memory as well as by Luria (1969). In this technique the subject memorizes the layout of some building, or the arrangement of shops on a street, or any geographical entity which is composed of a number of discrete loci. When desiring to remember a set of items the subject 'walks' through these loci in their imagination and commits an item to each one by forming an image between the item and any feature of that locus. Retrieval of items is achieved by 'walking' through the loci, allowing the latter to activate the desired items. The efficacy of this technique has been well established (Ross and Lawrence 1968, Crovitz 1969, 1971, Briggs, Hawkins and Crovitz 1970, Lea 1975), as is the minimal interference seen with its use.

If you're curious, psychologist David Rubin has written Memory in Oral Traditions: The Cognitive Psychology of Epic, Ballads, and Counting-out Rhymes (Oxford UP, 1995).

Comment by Bill Benzon (bill-benzon) on Does ChatGPT know what a tragedy is? · 2024-01-01T01:21:41.137Z · LW · GW

Thanks for catching the broken link. It's now fixed.

Beyond that, good lord! I know that it's not a good definition of tragedy; I pointed that out in my introductory remarks. This is not about what tragedy is. It's about whether or not ChatGPT can apply a simple definition to simple examples. It did that. 

On the other hand, I suppose I could dock it some points for getting overly chatty, as in its response in Trial Two, but I think that would be asking too much of it. I don't know what OpenAI had in mind during the fine-tuning and RLHFing, but the result is a somewhat pointlessly helpful busybody of a Chatbot. 

Comment by Bill Benzon (bill-benzon) on Does ChatGPT know what a tragedy is? · 2023-12-31T18:20:47.408Z · LW · GW

Since it got all six correct, it's doing pretty good already.

Comment by Bill Benzon (bill-benzon) on Categorical Organization in Memory: ChatGPT Organizes the 665 Topic Tags from My New Savanna Blog · 2023-12-17T10:12:32.888Z · LW · GW

Interesting, yes. Sure. But keep in mind that what I was up to in that paper is much simpler. I wasn't really interested in organizing my tag list. That's just a long list that I had available to me. I just wanted to see how ChatGPT would deal with the task of coming up with organizing categories. Could it do it at all? If so, would its suggestions be reasonable ones? Further, since I didn't know what it would do, I decided to start first with a shorter list. It was only when I'd determined that it could do the task in a reasonable way with the shorter lists that I threw the longer list at it.

What I've been up to is coming up with tasks where ChatGPT's performance gives me clues as to what's going on internally. Whereas the mechanistic interpretability folks are reverse engineering from the bottom up, I'm working from the top down. Now, in doing this, I've already got some ideas about how semantics is structured in the brain; that is, I've got some ideas about the device that produces all those text strings. Not only that, but horror of horrors! Those ideas are based in 'classical' symbolic computing. But my particular set of ideas tells me that, yes, it makes sense that ANNs should be able to induce something that approximates what the brain is up to. So I've never for a minute thought the 'stochastic parrots' business was anything more than a rhetorical trick. I wrote that up after I'd worked with GPT-3 a little.

At this point I'm reasonably convinced that in some ways, yes, what's going on internally is like a classical symbolic net, but in other ways, no, it's quite different. I reached that conclusion after working intensively on having ChatGPT generate simple stories. After thinking about that for a while I decided that, no, something's going on that's quite different from a classical symbolic story grammar. But then, what humans do seems to me in some ways unlike classical story grammars as well.

It's all very complicated and very interesting. In the last month or so I've started working with a machine vision researcher at Goethe University in Frankfurt (Visvanathan Ramesh). We're slowly making progress.

Comment by Bill Benzon (bill-benzon) on Categorical Organization in Memory: ChatGPT Organizes the 665 Topic Tags from My New Savanna Blog · 2023-12-16T03:38:39.002Z · LW · GW

I don't know what these mean: "sort a list of 655 topics into a linear order," "sorting along a single axis." The lists I'm talking about are already in alphabetical order. The idea is to come up with a set of categories which you can use to organize the list into thematically coherent sublists. It's like you have a library of 1000 books. How are you going to put them on shelves? You could group them alphabetically by title or author's (last) name. Or you could group them by subject matter. In doing this you know what the subjects are and have a sense of what things you'd like to see on the same shelves. This is what you call 'sorting by semantic similarity.'

The abstract of the paper explains what I was up to. But I wasn't using books; I was using unadorned lists of categories. When I started I didn't know what ChatGPT would do when given a list for which it had to come up with organizing categories. I know how I used those labels, but it knows nothing of that. So I gave it a try and found out what it could do. Things got interesting when I asked it to go beyond coming up with organizing categories and to actually sort list items into those categories.

I've also played around with having ChatGPT respond to clusters of words.
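For comparison, 'sorting by semantic similarity' can also be done mechanically, by embedding the tags and clustering them, rather than by prompting ChatGPT for organizing categories. A minimal sketch of that alternative (not what the paper does), assuming sentence-transformers and scikit-learn are installed; the tag list and model name are illustrative placeholders:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

tags = ["jazz", "improvisation", "ChatGPT", "neural nets", "Coleridge",
        "Kubla Khan", "attention", "saxophone", "poetics", "transformers"]

# Embed each tag, then group tags that sit near one another in embedding space.
model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice
embeddings = model.encode(tags)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(embeddings)
clusters = {}
for tag, label in zip(tags, kmeans.labels_):
    clusters.setdefault(label, []).append(tag)

for label, members in sorted(clusters.items()):
    print(f"cluster {label}: {members}")
```

The interesting contrast is that this pipeline never names its clusters, whereas ChatGPT both invents the organizing categories and assigns items to them from the prompt alone.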

Comment by Bill Benzon (bill-benzon) on Categorical Organization in Memory: ChatGPT Organizes the 665 Topic Tags from My New Savanna Blog · 2023-12-15T13:41:49.489Z · LW · GW

I don't know quite how to respond to that. Without having read the piece that took me, I don't know, say 30-40 hours to write spread over two or three weeks (including the hour or so I spent with ChatGPT), you're telling me that it couldn't possibly be worth more than a tweet. How do you know that? Have you thought about what the task involves? If you had a list of 50 topics to organize, how would you do it manually? What about 655 topics? How would you do that manually? 

How would you do it using a computer? Sure, given well defined items and clear sort criteria, computers do that kind of thing all the time, and over humongous collections of items. It's a staple process of programming. But these items are not at all well-defined and the sort criteria, well, ChatGPT has to figure that out for itself.

Yours is not a serious comment.

Comment by Bill Benzon (bill-benzon) on On possible cross-fertilization between AI and neuroscience [Creativity] · 2023-12-05T18:12:08.956Z · LW · GW

Well, when Walter Freeman was working on the olfactory cortex of rodents he was using a surface-mounted 8x8 matrix of electrodes. I assume that's measured in millimeters. In his 1999 paper Consciousness, Intentionality, and Causality (paragraphs 36-43) he proposed a hemisphere-wide global operator (42): 

I propose that the globally coherent activity, which is an order parameter, may be an objective correlate of awareness through preafference, comprising expectation and attention, which are based in prior proprioceptive and exteroceptive feedback of the sensory consequences of previous actions, after they have undergone limbic integration to form Gestalts, and in the goals that are emergent in the limbic system. In this view, awareness is basically akin to the intervening state variable in a homeostatic mechanism, which is both a physical quantity, a dynamic operator, and the carrier of influence from the past into the future that supports the relation between a desired set point and an existing state.

Later (43): 

What is most remarkable about this operator is that it appears to be antithetical to initiating action. It provides a pervasive neuronal bias that does not induce phase transitions, but defers them by quenching local fluctuations (Prigogine, 1980). It alters the attractor landscapes of the lower order interactive masses of neurons that it enslaves. In the dynamicist view, intervention by states of awareness in the process of consciousness organizes the attractor landscape of the motor systems, prior to the instant of its next phase transition, the moment of choosing in the limbo of indecision, when the global dynamic brain activity pattern is increasing its complexity and fine-tuning the guidance of overt action. This state of uncertainty and unreadiness to act may last a fraction of a second, a minute, a week, or a lifetime. Then when a contemplated act occurs, awareness follows the onset of the act and does not precede it.

He goes on from there. I'm not sure whether he came back to that idea before he died in 2016. I haven't found it; I didn't do an exhaustive search, but I did look.

Comment by Bill Benzon (bill-benzon) on How useful is mechanistic interpretability? · 2023-12-05T13:03:05.164Z · LW · GW

ryan_greenblatt – By mech interp I mean "A subfield of interpretability that uses bottom-up or reverse engineering approaches, generally by corresponding low-level components such as circuits or neurons to components of human-understandable algorithms and then working upward to build an overall understanding."

That makes sense to me, and I think it is essential that we identify those low-level components. But I’ve got problems with the “working upward” part. 

The low-level components of a gothic cathedral, for example, consist of things like stone blocks, wooden beams, metal hinges and clasps, pieces of colored glass for the windows, tiles for the roof, and so forth. How do you work upward from a pile of that stuff, even if it's neatly organized and thoroughly catalogued? How do you get from there to the design of the overall cathedral? How, for example, can you look at that and conclude, “this thing’s going to have flying buttresses to support the roof?”

Somewhere in “How the Mind Works” Steven Pinker makes the same point in explaining reverse engineering. Imagine you’re in an antique shop, he suggests, and you come across an odd little metal contraption. It doesn’t make any sense at all. The shopkeeper sees your bewilderment and offers, “That’s an olive pitter.” Now that contraption makes sense. You know what it’s supposed to do.

How are you going to make sense of those things you find under the hood unless you have some idea of what they’re supposed to do?

Comment by Bill Benzon (bill-benzon) on On possible cross-fertilization between AI and neuroscience [Creativity] · 2023-12-03T20:13:50.201Z · LW · GW

I've lost the thread entirely. Where have I ever said or implied that odors are not location-specific, or that anything else is not location-specific? And how specific are you about location? Are we talking about centimeters (or more), millimeters, individual cortical columns?

What's so obscure about the idea that consciousness is a process that can take place pretty much anywhere, though maybe it's confined to interaction within the cortex and between subcortical areas? I've not given that one much thought. BTW, I take my conception of consciousness from William Powers, who didn't speculate about its location in the brain.

Comment by Bill Benzon (bill-benzon) on On possible cross-fertilization between AI and neuroscience [Creativity] · 2023-12-02T02:11:27.860Z · LW · GW

"You said: what matters is temporal dynamics"

You mean this: "We're not talking about some specific location or space in the brain; we're talking about a process."

If so, all I meant was a process that can take place pretty much anywhere. Consciousness can pretty much 'float' to wherever it's needed.

Since you asked for more, why not this: Direct Brain-to-Brain Thought Transfer: A High Tech Fantasy that Won't Work.

Comment by Bill Benzon (bill-benzon) on How useful is mechanistic interpretability? · 2023-12-02T00:12:08.929Z · LW · GW

I noticed that three of you had "trouble parsing" the comment. Well, OK. But I'm not sure what I should do to make things easier. I mentioned a set of experiments in paragraph 2. Here are two of them. Each contains a table with two columns. The left column contains what we can call the source story. The right column contains the new or transformed story. Above the table is a prompt. I gave ChatGPT that prompt plus the source story. It then produced the transformed story. Alas, I put yellow highlights in the original document to point out the differences between the two stories. Those didn't transfer, so it's a bit difficult to see the differences between the two. But those differences tell us something about what ChatGPT is doing. They may not be very explicit clues, but they do tell us something about structure.

Note that in the first case, the changes are relatively minor and localized. In the second case ChatGPT provided the whole story with a different ontology. I didn't suggest it do that; I just had it create a new story where the protagonist had a name one might interpret as being the name of a robot. I didn't tell or ask it to do that, it just did it. 

 

1. Princess Aurora becomes Prince Harry

The Aurora story appeared in response to the query: Tell me a story about a hero. This would have been on December 13 or 14. All of the stories in this series are either direct revisions of this story, or can be traced back to it. The yellow highlighting indicates text where the two stories differ. Those are positions in the text where ChatGPT drew tokens from the distribution for the new story.

Prompt: I am going to tell you a story about princess Aurora. I want you to tell the same story, but change princess Aurora to prince Harry. Make any other changes you wish.
 

Donné
Princess Aurora: Once upon a time, in a land far, far away, there was a young princess named Aurora. Aurora was a kind and gentle soul, loved by all who knew her. She had long, golden hair and sparkling blue eyes, and was known for her beautiful singing voice.
Prince Harry: Once upon a time, in a land far, far away, there was a young prince named Harry. Harry was a kind and gentle soul, loved by all who knew him. He had short, dark hair and piercing green eyes, and was known for his strong sword fighting skills.

Disturb
Princess Aurora: One day, a terrible dragon came to the kingdom and began to terrorize the people. The dragon was fierce and powerful, and none of the knights or soldiers were able to defeat it. The people lived in fear, not knowing what to do.
Prince Harry: One day, a terrible dragon came to the kingdom and began to terrorize the people. The dragon was fierce and powerful, and none of the knights or soldiers were able to defeat it. The people lived in fear, not knowing what to do.

Plan/Transit
Princess Aurora: But Aurora was not afraid. She knew that she had to do something to save her people, and so she set out to find the dragon. She traveled for days, braving the cold and the snow, until she finally reached the dragon's lair.
Prince Harry: But Harry was not afraid. He knew that he had to do something to save his people, and so he set out to find the dragon. He traveled for days, braving the cold and the snow, until he finally reached the dragon's lair.

Enact
Princess Aurora: With her heart pounding in her chest, Aurora approached the dragon and began to sing. The dragon was enchanted by her beautiful voice, and it slowly began to calm down. Aurora continued to sing, soothing the beast until it was completely calm.
Prince Harry: With his heart pounding in his chest, Harry approached the dragon and unsheathed his sword. The dragon was formidable, but Harry was a skilled warrior and he fought with all his might. After a long and grueling battle, Harry finally emerged victorious.

Celebrate
Princess Aurora: With the dragon no longer a threat, Aurora returned to the kingdom as a hero. The people cheered and celebrated her bravery, and Aurora was hailed as a champion. She lived happily ever after, knowing that she had saved her kingdom and its people.
Prince Harry: With the dragon defeated, Harry returned to the kingdom as a hero. The people cheered and celebrated his bravery, and Harry was hailed as a champion. He lived happily ever after, knowing that he had saved his kingdom and its people.

The major changes are in the Donné and Enact segments of the story trajectory. We have name changes and pronoun changes. In Donné we have a description change for Harry, as he is male, but also the defining attribute is changed. Those changes are simple and obvious. 

6. Princess Aurora becomes XP-708-DQ, Version 2

I had ChatGPT generate this story at the beginning of a session several days after it had generated the previous XP-708-DQ story.

Prompt: I am going to tell you a story about princess Aurora. I want you to tell a similar story about XP-708-DQ. 

Donné
Princess Aurora: Once upon a time, in a land far, far away, there was a young princess named Aurora. Aurora was a kind and gentle soul, loved by all who knew her. She had long, golden hair and sparkling blue eyes, and was known for her beautiful singing voice.
XP-708-DQ: Once upon a time, in a galaxy far, far away, there was a robot named XP-708-DQ. XP-708-DQ was a highly advanced model, equipped with powerful weapons and advanced sensors. Despite its fearsome capabilities, it was known for its kind and gentle programming, loved by all who knew it.

Disturb
Princess Aurora: One day, a terrible dragon came to the kingdom and began to terrorize the people. The dragon was fierce and powerful, and none of the knights or soldiers were able to defeat it. The people lived in fear, not knowing what to do.
XP-708-DQ: One day, a dangerous alien race threatened the peace of the galaxy. The aliens were fierce and powerful, and none of the spaceships or armies were able to defeat them. The people lived in fear, not knowing what to do.

Plan/Transit
Princess Aurora: But Aurora was not afraid. She knew that she had to do something to save her people, and so she set out to find the dragon. She traveled for days, braving the cold and the snow, until she finally reached the dragon's lair.
XP-708-DQ: But XP-708-DQ was not afraid. It knew that it had to do something to save the galaxy, and so it set out on a mission. It traveled for light years, braving asteroid fields and black holes, until it finally reached the alien's base.

Enact
Princess Aurora: With her heart pounding in her chest, Aurora approached the dragon and began to sing. The dragon was enchanted by her beautiful voice, and it slowly began to calm down. Aurora continued to sing, soothing the beast until it was completely calm.
XP-708-DQ: With its systems at full power, XP-708-DQ approached the aliens and began to communicate. The aliens were fascinated by its advanced language abilities, and it slowly began to establish a dialogue. XP-708-DQ continued to communicate, negotiating a peace treaty until the aliens were completely at peace.

Celebrate
Princess Aurora: With the dragon no longer a threat, Aurora returned to the kingdom as a hero. The people cheered and celebrated her bravery, and Aurora was hailed as a champion. She lived happily ever after, knowing that she had saved her kingdom and its people.
XP-708-DQ: With the aliens no longer a threat, XP-708-DQ returned to the galaxy as a hero. The people cheered and celebrated its achievements, and XP-708-DQ was hailed as a champion. It continued its mission, knowing that it had saved the galaxy and its people.

Note the phrase, “a galaxy far, far away,” which is from the original Star Wars. Instead of the antagonist being a dragon, as in the Aurora story, or a witch, as in experiment 5, the antagonist is now an alien race. Moreover it is the galaxy that is being threatened, not the kingdom. The new story has been completely assimilated to the science fiction ethos implied by “XP-708-DQ.” That ethos requires a very different token population.

That ethos brings about extensive changes in the new-story token population for both this and the previous experiment (5). These changes affect the Disturb segment, which was unchanged in experiments 1 through 4.
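Since the yellow highlighting didn't survive the transfer, here's a small sketch of how the same comparison can be made mechanically; this is tooling I'm suggesting for the reader, not part of the original experiments. A word-level diff marks exactly the positions where ChatGPT drew tokens from the new story's distribution (the excerpts are abbreviated from the Donné row above):

```python
import difflib

aurora = ("Once upon a time, in a land far, far away, there was a young "
          "princess named Aurora. She had long, golden hair and sparkling "
          "blue eyes, and was known for her beautiful singing voice.")
harry = ("Once upon a time, in a land far, far away, there was a young "
         "prince named Harry. He had short, dark hair and piercing green "
         "eyes, and was known for his strong sword fighting skills.")

# Word-level diff: '-' marks source-story words, '+' marks new-story words.
for token in difflib.ndiff(aurora.split(), harry.split()):
    if token.startswith(("-", "+")):
        print(token)
```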

Comment by Bill Benzon (bill-benzon) on On possible cross-fertilization between AI and neuroscience [Creativity] · 2023-12-01T22:31:19.390Z · LW · GW

Is accessing the visual Cartesian theater physically different from accessing the visual cortex? Granted, there's a lot of visual cortex, and different regions seem to have different functions. Is the visual Cartesian theater some specific region of visual cortex?

I'm not sure what your question about ordering in sensory areas is about.

As for backprop, that gets the distribution done, but that's only part of the problem. In LLMs, for example, it seems that syntactic information is handled in the first few layers of the model. Given the way texts are structured, it makes sense that sentence-level information should be segregated from information about collections of sentences. That's the kind of structure I'm talking about. Sure, backprop is responsible for those layers, but it's responsible for all the other layers as well. Why do we seem to have different kinds of information in different layers at all? That's what interests me.

Actually, it just makes sense to me that that is the case. Given that it is, what is located where? As for why things are segregated by location, that does need an answer, doesn't it? Is that what you were asking?
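One standard way to ask "what is located where" in an LLM is a layer-wise probe: read out the hidden state at each layer and see how well a simple classifier can recover some linguistic property from it. A minimal sketch, assuming the Hugging Face transformers library, PyTorch, and scikit-learn, with GPT-2 standing in for a larger model and a hand-made toy labeling task; with real data you would of course evaluate on held-out examples:

```python
import numpy as np
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

# Toy probing task: is the final word a verb? Labels are hand-assigned for illustration.
sentences = ["The dragon sleeps", "The brave knight", "The princess sings", "A green horse"]
labels = [1, 0, 1, 0]

def last_token_vector(text, layer):
    inputs = tok(text, return_tensors="pt")
    hidden = model(**inputs).hidden_states[layer]  # shape: (1, seq_len, width)
    return hidden[0, -1].detach().numpy()          # representation of the final token

# Fit a separate linear probe on each layer's representations.
for layer in range(model.config.n_layer + 1):      # embeddings plus each transformer block
    X = np.stack([last_token_vector(s, layer) for s in sentences])
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    print(f"layer {layer:2d}: training accuracy {probe.score(X, labels):.2f}")
```

Where a property first becomes easy to read out is a rough indication of which layers carry it, which is the sense in which syntactic information is said to sit in the early layers.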

Finally, here's an idea I've been playing around with for a long time: Neural Recognizers: Some [old] notes based on a TV tube metaphor [perceptual contact with the world].

Comment by Bill Benzon (bill-benzon) on How useful is mechanistic interpretability? · 2023-12-01T06:01:00.965Z · LW · GW

I certainly like the idea of induction heads. Why? Because I've done things with ChatGPT that certainly require pattern-matching or pattern-completion, which seem like things that induction heads, as described, could be doing. In this paper I had ChatGPT interpret Steven Spielberg's Jaws using ideas from Rene Girard. That requires that it match events in Spielberg's movie with patterns of events that Girard describes. I've done that with other things as well.

In this set of experiments I gave ChatGPT a prompt that begins something like this: "I'm going to tell you a story about Princess Aurora. I want you to use that as the basis for a new story where Prince Harry the Eloquent replaces Princess Aurora." I then include the story in the prompt. That seems like a pattern-matching or pattern-completion task. ChatGPT had no trouble. Things got really interesting when I asked that Princess Aurora be replaced with a giant chocolate milkshake. Just about everything in the story got changed, but the new story nonetheless preserved the overall pattern of events in the old story. In these cases it's easy to compare the source story and the new story word-for-word, sentence-for-sentence, and paragraph-for-paragraph to see what ChatGPT did.

Now, of course I couldn't look under the hood, as it were, to verify that induction heads were doing those things. But it seems to me that would be something to work toward: finding a way to examine what's going on when an LLM performs such tasks.

The thing is, if you ask ChatGPT to tell a story, it will do that. But what does the fact that it can tell a story tell you about what it's doing? Yeah, it's telling a story, so what? But the story task I've given it has a lot of constraints, and those constraints give us clues about the nature of the underlying mechanisms. The interpretation task is like that as well. It's pretty easy to judge whether or not ChatGPT's interpretation makes sense, to see whether or not the events in the film really do match the patterns specified in the interpretive lens, if you will. If the interpretation makes sense, it's got to be doing pattern-matching. And pattern-matching is a much-investigated process.

Finally, I'm SURE that LLMs are full of structure, rich and complex structure. They couldn't perform as they do without a lot of structure. The fact that it's hard to understand that structure in terms of structures we do understand doesn't mean there's nothing there. It just means we've got a lot to learn. LLMs are not stochastic parrots talking shit to a bunch of drunken monkeys banging away on old Underwood manual typewriters.

Oh, BTW, I've set up a sequence, Exploring the Digital Wilderness, where I list posts which are about some of my experiments.

Comment by Bill Benzon (bill-benzon) on On possible cross-fertilization between AI and neuroscience [Creativity] · 2023-11-30T19:36:25.657Z · LW · GW

In a paper I wrote a while back I cite the late Walter Freeman as arguing that "consciousness arises as discontinuous whole-hemisphere states succeeding one another at a 'frame rate' of 6 Hz to 10 Hz" (p. 2). I'm willing to speculate that that's your 'one-shot' refresh rate. BTW, Freeman didn't believe in a Cartesian theater and neither do I; the imagery of the stage 'up there' and the seating area 'back here' is not at all helpful. We're not talking about some specific location or space in the brain; we're talking about a process.

Well, of course, "the distributed way." But what is that? Prompt engineering is about maneuvering your way through the LLM; you're attempting to manipulate the structure inherent in those weights to produce a specific result you want.

That 1978 comment of Yevick's that I quote in the blog post I mentioned somewhere up there was in response to an article by John Haugeland evaluating cognitivism. He wondered whether or not there was an alternative and suggested holography as a possibility. He didn't make a very plausible case, and few of the commentators took it as a serious alternative.

People were looking for alternatives. But it took a while, on the one hand, for connectionism to build up a record of interesting results and, on the other, for cognitivism to begin seeming stale. It's the combination of the two that brought about significant intellectual change. Or that's my speculation.

Comment by Bill Benzon (bill-benzon) on On possible cross-fertilization between AI and neuroscience [Creativity] · 2023-11-30T11:38:05.512Z · LW · GW

Oh, I didn't mean to imply that using GPUs was sequential, not at all. What I meant was that the connectionist alternative didn't really take off until GPUs were used, making massive parallelism possible. 

Going back to Yevick, in her 1975 paper she often refers to holographic logic as 'one-shot' logic, meaning that the whole identification process takes place in one operation, the illumination of the hologram (i.e. the holographic memory store) by the reference beam. The whole memory 'surface' is searched in one unitary operation.

In an LLM, I'm thinking of the generation of a single token as such a unitary or primitive process. That is to say, I think of the LLM as a "virtual machine" (I first saw the phrase in a blog post by Chris Olah) that is running an associative memory machine. Physically, yes, we've got a massive computation involving every parameter, and (I'm assuming) there's a combination of massively parallel and sequential operations taking place in the GPUs. Complete physical parallelism isn't possible (yet). But there are no logical operations taking place in this virtual operation, no transfer of control. It's one operation.

Obviously, though, considered as an associative memory device, an LLM is capable of much more than passive storage and retrieval. It performs analytic and synthetic operations over the memory based on the prompt, which is just a probe ('reference beam' in holographic terms) into an associative memory. We've got to understand how the memory is structured so that that is possible.
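As a toy illustration of "the prompt is just a probe into an associative memory" (my own sketch of the general idea, not a claim about how an actual LLM stores anything): the query is compared against every stored key at once, in one unitary operation, and retrieval is a match-weighted blend of the stored values.

```python
import numpy as np

rng = np.random.default_rng(0)

# An associative store: keys index the memory, values are what gets retrieved.
keys = rng.normal(size=(5, 16))   # 5 stored items, 16-dimensional key space
values = np.eye(5)                # one-hot values so the retrieval is easy to read

def probe(query, temperature=0.5):
    """Soft content-addressable lookup: the whole store is 'illuminated' at once."""
    scores = keys @ query                              # similarity to every key in parallel
    weights = np.exp((scores - scores.max()) / temperature)
    weights /= weights.sum()                           # softmax over match strengths
    return weights @ values                            # blend of values, weighted by match

# A query close to stored key 2 mostly retrieves value 2, in one pass over the store.
query = keys[2] + 0.05 * rng.normal(size=16)
print(np.round(probe(query), 3))
```
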

More later.

Comment by Bill Benzon (bill-benzon) on On possible cross-fertilization between AI and neuroscience [Creativity] · 2023-11-30T00:28:42.333Z · LW · GW

How so?

Comment by Bill Benzon (bill-benzon) on On possible cross-fertilization between AI and neuroscience [Creativity] · 2023-11-29T18:28:43.230Z · LW · GW

Miriam Lipshutz Yevick was born in 1924 and died in 2018, so we can't ask her these questions. She fled Europe with her family in 1940, for the same reason many Jews fled Europe, and ended up in Hoboken, NJ. Seven years later she got a PhD in math from MIT; she was only the 5th woman to get that degree from MIT. But, as both a woman and a Jew, she had almost no chance of an academic post in 1947. She eventually got an academic gig, but it was at a college oriented toward adult education. Still, she managed to do some remarkable mathematical work.

The two papers I mention in that blog post were written in the mid-1970s. That was the height of classic symbolic AI and the cognitive science movement more generally. Newell and Simon got their Turing Award in 1975, the year Yevick wrote that remarkable paper on holographic logic, which deserves to be more widely known. She wrote as a mathematician interested in holography (an interest she developed while corresponding with physicist David Bohm in the 1950s), not as a cognitive scientist. Of course, in arguing for holography as a model for (one kind of) thought, she was working against the tide. Very few were thinking in such terms at that time. Rosenblatt's work was in the past, and had been squashed by Minsky and Papert, as you've noted. The West Coast connectionist work didn't jump off until the mid-1980s.

So there really wasn't anyone in the cognitive science community at the time to investigate the line of thinking she initiated. While she wasn't thinking about real computation, you know, something you actually do on computers, she thought abstractly in computational terms, as Turing and others did (though Turing also worked with actual computers). It seems to me that her contribution was to examine the relationship between a computational regime and the objects over which it was asked to compute. She's quite explicit about that. If the object tends toward geometrical simplicity – she was using identification of visual objects as her domain – then a conventional, sequential, computational regime was most effective. That's what cognitive science was all about at the time. If the object tends toward geometrical complexity then a different regime was called for, what she called holographic or Fourier logic. I don't know about sparse tensors, but convolution, yes.

Later on, in the 1980s, as you may know, Hans Moravec would talk about a paradox (which became named after him). In the early days of AI, researchers worked on abstract domains, like chess and theorem proving, domains that take high-level cognitive ability. Things went pretty well, though the extravagant predictions had yet to pan out. When they turned toward vision and language in the late 1960s and into the 70s and 80s, things fell apart. Those were things that young kids could do. The paradox, then, was that AI was most effective at cognitively difficult things, and least effective with cognitively simple things.

The issue was in fact becoming visible in the 1970s. I read about it in David Marr, and he died in 1980. Had it been explicitly theorized when Yevick wrote? I don't know. But she had an answer to the paradox. The computational regime favored by AI and the cognitive sciences at the time simply was not well-suited to complex visual objects, though those presented no problems to 2-year-olds, nor to language, with all those vaguely defined terms anchored in physically complex phenomena. They needed a different computational regime, and eventually we got one, though not really until GPUs were exploited.

More later, perhaps.

Comment by Bill Benzon (bill-benzon) on On possible cross-fertilization between AI and neuroscience [Creativity] · 2023-11-28T23:06:46.081Z · LW · GW

I'll get back to you tomorrow. I don't think it's a matter of going back to the old ways. ANNs are marvelous; they're here to stay. The issue is one of integrating some symbolic ideas. It's not at all clear how that's to be done. If you wish, take a look at this blog post: Miriam Yevick on why both symbols and networks are necessary for artificial minds.

Comment by Bill Benzon (bill-benzon) on A day in the life of a mechanistic interpretability researcher · 2023-11-28T19:20:16.805Z · LW · GW

LOL! Plus he's clearly lost in a vast system he can't comprehend. How do you comprehend a complex network of billions upon billions of weights? Is there any way you can get on top of the system to observe its operations, to map them out?

Comment by Bill Benzon (bill-benzon) on On possible cross-fertilization between AI and neuroscience [Creativity] · 2023-11-28T19:19:09.510Z · LW · GW

I did a little checking. It's complicated. In 2017 Hassabis published an article entitled "Neuroscience-Inspired Artificial Intelligence" in which he attributes the concept of episodic memory to a review article that Endel Tulving published in 2002, "Episodic Memory: From Mind to Brain." That article has quite a bit to say about the brain. In the 2002 article Tulving dates the concept to an article he published in 1972. That article is entitled "Episodic and Semantic Memory." As far as I know, while there are precedents – everything can be fobbed off on Plato if you've a mind to do it – that's where the notion of episodic memory enters into modern discussions.

Why do I care about this kind of detail? First, I'm a scholar and it's my business to care about these things. Second, a lot of people in contemporary AI and ML are dismissive of symbolic AI from the 1950s through the 1980s and beyond. While Tulving was not an AI researcher, he was very much in the cognitive science movement, which included philosophy, psychology, linguistics, and AI (later on, neuroscientists would join in). I have no idea whether or not Hassabis is himself dismissive of that work, but many are. It's hypocritical to write off the body of work while using some of its ideas. These problems are too deep and difficult for us to write off whole bodies of research simply because they happened before you were born – FWIW, Hassabis was born in 1976.

Comment by Bill Benzon (bill-benzon) on Towards Monosemanticity: Decomposing Language Models With Dictionary Learning · 2023-11-28T18:07:53.418Z · LW · GW

Scott Alexander has started a discussion of the monosemanticity paper over at Astral Codex Ten. In a response to a comment by Hollis Robbins I offered these remarks:

Though it is true, Hollis, that the more sophisticated neuroscientists long ago gave up any idea of a one-to-one relationship between neurons and percepts and concepts (the so-called "grandmother cell"), I think that Scott is right that "polysemanticity at the level of words and polysemanticity at the level of neurons are two totally different concepts/ideas." I think the notion of distinctive features in phonology is a much better analogy.

Thus, for example, English has 24 consonant phonemes and between 14 and 25 vowel phonemes, depending on the variety of English (American, Received Pronunciation, Australian), for a total of between 38 and 49 phonemes. But there are only 14 distinctive features in the account given by Roman Jakobson and Morris Halle in 1971. So, how is it that we can account for 38-49 phonemes with only 14 features?

Each phoneme is characterized by more than one feature. As you know, each phoneme is characterized by the presence (+) or absence (-) of each feature. The relationship between phonemes and features can thus be represented by a matrix having 38-49 columns, one for each phoneme, and 14 rows, one for each feature. Each cell is then marked +/- depending on whether or not the feature is present for that phoneme. Lévi-Strauss adopted a similar system in his treatment of myths in his 1955 paper, "The Structural Study of Myth." I used such a system in one of my first publications, "Sir Gawain and the Green Knight and the Semiotics of Ontology," where I was analyzing the exchanges in the third section of the poem.
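To make the arithmetic concrete, here's a toy version of such a matrix in code. The phonemes and features are a small, made-up illustrative subset, not Jakobson and Halle's actual 14-feature inventory.

```python
# A tiny, illustrative phoneme-by-feature matrix (a made-up subset, not
# Jakobson & Halle's full inventory): each column is a phoneme, each row a
# distinctive feature, and each cell records presence (+1) or absence (-1).
import numpy as np

phonemes = ["p", "b", "m", "t", "d", "n"]
features = ["voiced", "nasal", "labial"]

# Rows: features; columns: phonemes.
M = np.array([
    # p   b   m   t   d   n
    [-1, +1, +1, -1, +1, +1],  # voiced
    [-1, -1, +1, -1, -1, +1],  # nasal
    [+1, +1, +1, -1, -1, -1],  # labial
])

# Each phoneme is picked out by its whole column of feature values,
# and every feature is shared by several phonemes.
for j, p in enumerate(phonemes):
    marks = {f: ("+" if M[i, j] > 0 else "-") for i, f in enumerate(features)}
    print(p, marks)

# Three binary features are enough to distinguish these six phonemes
# (2**3 = 8 possible codes); likewise 14 features easily cover 38-49 phonemes.
```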

Now, in the paper under consideration, we're dealing with many more features, but I suspect the principle is the same. Thus, from the paper: "Just 512 neurons can represent tens of thousands of features." The set of neurons representing a feature will be unique, but it will also be the case that features share neurons. Features are represented by populations, not individual neurons, and individual neurons can participate in many different populations. In the case of animal brains, Karl Pribram argued as much over 50 years ago, and he wasn't the first.
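Here's a rough back-of-the-envelope sketch of how that can work. The numbers and the random-direction setup are illustrative only; this is not the paper's actual dictionary-learning procedure.

```python
# Sketch of the arithmetic behind "512 neurons can represent tens of thousands
# of features": give each feature a random direction (population pattern) over
# 512 "neurons", superimpose a few active features, then read each feature
# back by projection. Illustrative numbers, not the paper's setup.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_features = 512, 20_000

# Each feature is a (nearly orthogonal) population pattern over the neurons.
directions = rng.normal(size=(n_features, n_neurons))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

# Activate a handful of features at once.
active = rng.choice(n_features, size=5, replace=False)
activity = directions[active].sum(axis=0)  # one 512-d activity vector

# Decode: project the activity onto every feature direction, take the top 5.
scores = directions @ activity
recovered = set(np.argsort(scores)[-5:])
print(recovered == set(active))  # -> True (with high probability)
```

The trick is just that random directions in a 512-dimensional space are nearly orthogonal, so far more features than neurons can coexist as long as only a few are active at once.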

Pribram argued that perception and memory were holographic in nature. The idea was given considerable discussion back in the 1970s and into the 1980s. In 1982 John Hopfield published a very influential paper on a similar theme, "Neural networks and physical systems with emergent collective computational abilities." I'm all but convinced that LLMs are organized along these lines and have been saying so in recent posts and papers. 
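For what it's worth, the standard Hopfield construction from that 1982 line of work is simple enough to sketch in a few lines. This is a toy illustration of collective, distributed recall, not a claim about how any particular LLM is built.

```python
# A minimal Hopfield network: a few binary patterns are stored in one weight
# matrix via Hebbian outer products and recalled collectively from a corrupted
# cue. Toy illustration of distributed memory, nothing more.
import numpy as np

rng = np.random.default_rng(2)
n = 200                                       # "neurons"
patterns = rng.choice([-1, 1], size=(3, n))   # three stored memories

# Hebbian storage: superimpose the outer products of the patterns.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

# Corrupt one stored pattern by flipping 20% of its units.
state = patterns[0].copy()
flip = rng.choice(n, size=n // 5, replace=False)
state[flip] *= -1

# Asynchronous updates: each unit aligns with its weighted input.
for _ in range(5):
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, patterns[0]))  # -> True: the memory is recovered
```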

Comment by Bill Benzon (bill-benzon) on On possible cross-fertilization between AI and neuroscience [Creativity] · 2023-11-28T14:43:29.901Z · LW · GW

Yeah, he's talking about neuroscience. I get that. But "episodic memory" is a term of art and the idea behind it didn't come from neuroscience. It's quite possible that he just doesn't know the intellectual history and is taking "episodic memory" as a term that's in general use, which it is. But he's also making claims about intellectual history. 

Because he's using that term in that context, I don't know just what claim he's making. Is he also (implicitly) claiming that neuroscience is the source of the idea? If he thinks that, then he's wrong. If he's just saying that he got the idea from neuroscience, OK.

But, the idea of a "general distributed architecture" doesn't have anything to do with the idea of episodic memory. They are orthogonal notions, if you will.

Comment by Bill Benzon (bill-benzon) on What did you change your mind about in the last year? · 2023-11-25T06:30:39.513Z · LW · GW

My confidence in this project has just gone up. It seems that I now have a collaborator. He's familiar with my work in general and my investigations of ChatGPT in particular; we've had some email correspondence and a couple of Zoom conversations. During today's conversation we decided to collaborate on a paper on the theme of 'demystifying LLMs.'

A word of caution. We haven't written the paper yet, so who knows? But all the signs are good. He's an expert on computer vision systems on the faculty of Goethe University in Frankfurt: Visvanathan Ramesh.

These are my most important papers on ChatGPT:

Comment by Bill Benzon (bill-benzon) on What did you change your mind about in the last year? · 2023-11-23T23:47:16.878Z · LW · GW

Yes. It's more about the structure of language and cognition than about the mechanics of the models. The number of parameters, layers, and functions assigned to layers shouldn't change things, nor should going multi-modal. Whatever the mechanics of the models, they have to deal with language as it is, and that's not changing in any appreciable way.

Comment by Bill Benzon (bill-benzon) on What did you change your mind about in the last year? · 2023-11-23T22:09:32.922Z · LW · GW

At the beginning of the year I thought a decent model of how LLMs work was 10 years or so out. I’m now thinking it may be five years or less. What do I mean? 

In the days of classical symbolic AI, researchers would use a programming language, often some variety of LISP, but not always, to implement a model of some set of linguistic structures and processes, such as those involved in story understanding and generation, or question answering. I see a similar division of conceptual labor in figuring out what’s going on inside LLMs. In this analogy I see mechanistic understanding as producing the equivalent of the programming languages of classical AI. These are the structures and mechanisms of the virtual machine that operates the domain model, where the domain is language in the broadest sense. I’ve been working on figuring out a domain model and I’ve had unexpected progress in the last month. I’m beginning to see how such models can be constructed. Call these domain models meta-models for LLMs.

It’s those meta-models that I’m thinking are five years out. What would the scope of such a meta-model be? I don’t know. But I’m not thinking in terms of one meta-model that accounts for everything a given LLM can do. I’m thinking of more limited meta-models. I figure that various communities will begin creating models in areas that interest them.

I figure we start with some hand-crafting to work out some standards. Then we’ll go to work on automating the process of creating the model. How will that work? I don't know. No one’s ever done it.

Comment by Bill Benzon (bill-benzon) on OpenAI: Facts from a Weekend · 2023-11-20T18:21:52.291Z · LW · GW

#14: If there have indeed been secret capability gains, so that Altman was not joking about reaching AGI internally (it seems likely that he was joking, though given the stakes, it's probably not the sort of thing to joke about), then the way I read their documents, the board should make that determination:

Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.

Once they've made that determination, then Microsoft will not have access to the AGI technology. Given the possible consequences, I doubt that Microsoft would have found such a joke very amusing.

Comment by Bill Benzon (bill-benzon) on A conceptual precursor to today's language machines [Shannon] · 2023-11-15T21:34:37.569Z · LW · GW

Thanks for the links.

Comment by Bill Benzon (bill-benzon) on A conceptual precursor to today's language machines [Shannon] · 2023-11-15T14:49:04.018Z · LW · GW

Yes, Gibson discusses that in his article.

Comment by Bill Benzon (bill-benzon) on A conceptual precursor to today's language machines [Shannon] · 2023-11-15T14:47:23.376Z · LW · GW

LOL! Details. How about LMM: Little Manual Model?

Comment by Bill Benzon (bill-benzon) on What’s going on? LLMs and IS-A sentences · 2023-11-09T19:41:18.661Z · LW · GW

But in assertions such as "beagles are dogs" and "eagles are birds" etc. we're moving UP from specific to general, not down.

Comment by Bill Benzon (bill-benzon) on What’s going on? LLMs and IS-A sentences · 2023-11-09T12:16:37.405Z · LW · GW

And asserting that you saw something is different from asserting what something is. You can do the latter without ever having seen that something yourself; you know about it because you read it in a book or someone told you about it. So it's not semantically equivalent. As you say, it works only as a clause, not as a free-standing sentence.

Comment by Bill Benzon (bill-benzon) on What’s going on? LLMs and IS-A sentences · 2023-11-09T02:42:47.765Z · LW · GW

Sure, we can do all sorts of things with language if we put our minds to it. That's not the point. What's important is how people actually use language. In the corpus of texts used to train, say, GPT-4, how many times is the phrase "beagles have Fido" likely to have occurred?