Could LLMs Help Generate New Concepts in Human Language?
post by Pekka Lampelto · 2024-03-24T20:13:02.654Z · LW · GW
This is a question post.
Concepts are the central part of language. I would argue that concepts serve as condensed linguistic representations of concrete or abstract entities, aimed at enhancing the precision and efficiency of thinking and communication.
I found it fascinating to ponder how new concepts have emerged in human language. One could argue that the entire history of the development of human language is essentially a series of births of new concepts and of their enrichment and connection with existing concepts. Thus, the development of human language is in part the development of concept systems.
Once the basis of language had emerged, people arguably needed to conceptualize and name more and more things in the world around them.
Let's time travel to the Stone Age. Once the basis of language had emerged, people arguably needed to conceptualize and name more and more things in the world around them. Some of the first concepts and words were probably related to concrete things such as family members ("mother", "father", "brother", "sister", and "child"), animals ("man", "bear", "cat", "dog"), and tools ("stick" and "axe"). These were perhaps followed by slightly more abstract concepts such as actions and verbs like "sleeping", "eating", "talking", and "jumping". As thinking and language developed further, yet more challenging and abstract concepts arose, for example those related to feelings and intentions, such as "joy", "love", "sorrow", and "hate", and to community roles, such as "chief" and "shaman". Once the agrarian revolution and organized society emerged, a need arose for even more abstract concepts such as "nation", "story", "culture", "rules", "law", "religion", and "war". Even later, as societies grew and developed, whole systematically organized areas of knowledge emerged, such as "administration", "economy", "the art of warfare", "mathematics", "philosophy", and "astronomy".
The creation and naming of concepts can be thought of as innovations in language that fill gaps in thinking and communication.
It could be argued that concepts and their names have arisen out of need as the human environment, culture, and thinking have changed and developed. The creation and naming of concepts can be thought of as innovations in language that fill gaps in thinking and communication. These gaps in linguistic thinking can almost be said to summon the birth of a new concept and word.
A question arose in my mind: could large language models help humanity recognize, and even name, new valuable concepts? I decided to test whether ChatGPT-4 could be used to come up with new concepts and words.
Prompt: "Invent a whole new useful concept like 'human rights', 'religion', 'principle', 'technology', or 'principle' and explain it."
Some ChatGPT-4 answers:
1. Internal networking: Internal networking is a method of personal and professional growth in which an individual focuses on building deep and meaningful relationships in his own life circle, such as with family, friends, colleagues and local networks. It differs from traditional networking, which emphasizes the creation of new contacts and often expands the social or professional circle outward.
2. Thought Ecology: Thought ecology is an approach to mental and emotional well-being that focuses on the "ecological" balance of a person's world of thought. It refers to how an individual's thoughts, beliefs, attitudes and feelings affect each other and form a complex, dynamic system that can affect a person's overall well-being.
In my opinion, while quite interesting, these concepts lacked significant novelty, and the words themselves were compound words, as newly coined words nowadays typically are, formed by combining two existing words into one.
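For anyone who wants to repeat or scale this little experiment, here is a minimal sketch of how the same prompt could be sent programmatically through the OpenAI Python client. It only illustrates the setup, not the exact ChatGPT session above; the model name, temperature, and number of samples are assumptions.

```python
# Minimal sketch for repeating the concept-invention prompt programmatically.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment;
# the model name and sampling temperature are illustrative choices, not the exact
# settings of the ChatGPT session described above.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Invent a whole new useful concept like 'human rights', 'religion', "
    "'principle', or 'technology' and explain it."
)


def invent_concepts(n: int = 3) -> list[str]:
    """Ask the model for n candidate concepts, one completion per call."""
    concepts = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4",      # assumption: any chat-capable model works here
            temperature=1.0,    # higher temperature encourages more varied concepts
            messages=[{"role": "user", "content": PROMPT}],
        )
        concepts.append(response.choices[0].message.content)
    return concepts


if __name__ == "__main__":
    for i, concept in enumerate(invent_concepts(), start=1):
        print(f"--- Candidate {i} ---\n{concept}\n")
```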
Some questions arose from this thought experiment:
- In terms of LLM architecture, do transformer-based LLMs have the ability to invent new, genuinely useful concepts?
- How could we model the gaps in the conceptual systems of our language that "cry out" for new concepts to complement our conceptual systems?
***
About the author (me):
- This is my first post.
- About my background: I wrote a master's thesis in 2013 (Master's in Economics and Business Administration) on knowledge work and its automation with AI (the case-study technology was IBM Watson): https://trepo.tuni.fi/handle/10024/94436?show=full
- I work at a data compliance SaaS company that offers a tool for documenting and managing data protection and AI regulation requirements and risks. The documentation relies particularly on data-flow maps, assets, risks, and controls.
Answers
answer by NicholasKees
Recently @Joseph Bloom [LW · GW] was showing me Neuronpedia which catalogues features found in GPT-2 by sparse autoencoders [LW · GW], and there were many features which were semantically coherent, but I couldn't find a word in any of the languages I spoke that could point to these concepts exactly. It felt a little bit like how human languages often have words that don't translate, and this made us wonder whether we could learn useful abstractions about the world (e.g. that we actually import into English) by identifying the features being used by LLMs.
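For readers who haven't encountered the technique: a sparse autoencoder of this kind is trained to reconstruct a model's internal activations through an overcomplete, sparsity-penalized hidden layer, so that each hidden unit tends to light up for one semantically coherent "feature". The sketch below is a minimal PyTorch illustration of that idea, not the code behind Neuronpedia; the dimensions, penalty weight, and training data are placeholder assumptions.

```python
# Minimal sparse autoencoder sketch (illustrative only, not Neuronpedia's implementation).
# It learns an overcomplete dictionary of "features" from model activations; the
# dimensions and L1 penalty weight below are arbitrary assumptions.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 768, d_features: int = 4 * 768):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # activation -> feature coefficients
        self.decoder = nn.Linear(d_features, d_model)  # feature coefficients -> reconstruction

    def forward(self, x: torch.Tensor):
        features = torch.relu(self.encoder(x))  # non-negative, hopefully sparse codes
        reconstruction = self.decoder(features)
        return reconstruction, features


def train_step(sae, optimizer, activations, l1_coeff: float = 1e-3):
    """One optimization step: reconstruct activations while keeping feature codes sparse."""
    reconstruction, features = sae(activations)
    mse = torch.mean((reconstruction - activations) ** 2)  # reconstruction loss
    sparsity = features.abs().mean()                        # L1 penalty drives sparsity
    loss = mse + l1_coeff * sparsity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Usage sketch with random stand-in data; real training would use activations
# collected from GPT-2 while it reads a large text corpus.
sae = SparseAutoencoder()
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-4)
fake_activations = torch.randn(64, 768)
train_step(sae, optimizer, fake_activations)
```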
↑ comment by Viliam · 2024-03-25T16:59:06.911Z · LW(p) · GW(p)
I was going to ask for interesting examples. But perhaps we can do even better and choose examples with the highest value of... uhm... something.
I am just wildly guessing here, but it seems to me that if these features are somehow implied by the human text, the ones that are "implied most strongly" could be the most interesting ones. Unless they are just random artifacts of the process of learning.
If we trained the LLM using the same text database, but randomly arranged the sources, or otherwise introduced some noise, would the same concepts appear?
↑ comment by NicholasKees (nick_kees) · 2024-03-25T10:51:17.222Z · LW(p) · GW(p)
In terms of LLM architecture, do transformer-based LLMs have the ability to invent new, genuinely useful concepts?
So I'm not sure how well the word "invent" fits here, but I think it's safe to say LLMs have concepts that we do not.
janus developed Simulators after messing around with language models and identifying an archetype common to many generations, called Morpheus, which seems to represent the simulator.