Recent advances in Natural Language Processing—Some Woolly speculations (2019 essay on semantics and language models)

post by philosophybear · 2022-12-27

Contents

  1. Recent achievements in Natural Language Processing
  2. The rate of progress
  3. Meditations on chess
  4. Postscript—The Chinese Room Argument

This essay was written back in 2019, not long after GPT-2 came out. Although parts of it are dated (mostly the list of new achievements in Natural Language Processing), on the whole it's held up really well, and it outlines the core of my view on questions regarding semantics, philosophy of language, and natural language processing. I thought it was quite forgotten, but I recently saw an essay by Dragon God that mentioned the core of the idea in passing:

“Premise 1: Modelling is transitive. If X models Y and Y models Z, then X models Z.

Premise 2: Language models reality. "Dogs are mammals" occurs more frequently in text than "dogs are reptiles" because dogs are in actuality mammals and not reptiles. This statistical regularity in text corresponds to a feature of the real world. Language is thus a map (albeit flawed) of the external world.

Premise 3: GPT-3 models language. This is how it works to predict text.”

That is the argument of this essay, indeed almost the same words in parts. Perhaps the essay is outdated now, but if so, I'd like to think it's because the ideas within it have entered the water supply.

I'm putting my existing work on AI on Less Wrong, and editing as I go, in preparation for publishing a collection of my works on AI in a free online volume. If this content interests you, you could always follow my Substack; it's free and also under the name Philosophy Bear.

1. Recent achievements in Natural Language Processing

[2022 Edit: Contemporary readers could skip this section, although it may be a useful reminder of how quickly things have moved since then.]

Natural Language Processing (NLP), per Wikipedia,

"is a sub-field of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data."

The field has seen tremendous advances during the recent explosion of progress in machine learning techniques.

Here are some of its more impressive recent achievements:

A) The Winograd schema is a test of common-sense reasoning—easy for humans, but historically almost impossible for computers—which requires the test taker to indicate which noun an ambiguous pronoun stands for. The correct answer hinges on a single word, which differs between two versions of the question. For example: "The city councilmen refused the demonstrators a permit because they feared violence." / "The city councilmen refused the demonstrators a permit because they advocated violence." To whom does the pronoun "they" refer in each instance?

The Winograd schema test was originally intended as a more rigorous replacement for the Turing test, because it seems to require deep knowledge of how things fit together in the world, and the ability to reason about that knowledge in a linguistic context. Recent advances in NLP have allowed computers to achieve near-human scores; see the GLUE benchmark leaderboard for details.
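One simple way to attack such problems is to score each candidate reading with a language model and keep the more plausible one. Below is a minimal sketch in Python, assuming the Hugging Face transformers library and the public gpt2 checkpoint; it illustrates the general technique, not the actual method of any particular leaderboard entry.

```python
# A minimal sketch (not any benchmark entry's actual method): resolve a
# Winograd-style pronoun by substituting each candidate noun phrase and
# keeping whichever reading the language model finds more plausible,
# i.e. whichever has the lower average token loss.
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_token_loss(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing the inputs as labels makes the model return its own
        # next-token prediction loss over the sentence.
        return model(ids, labels=ids).loss.item()

def resolve(template, candidates):
    return min(candidates, key=lambda c: avg_token_loss(template.format(c)))

template = ("The city councilmen refused the demonstrators a permit "
            "because the {} feared violence.")
print(resolve(template, ["councilmen", "demonstrators"]))
```

Swapping "feared" for "advocated" should flip the preferred candidate, though a model as small as gpt2 will not get every schema right.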

B) The New York Regents science exam is a test requiring both scientific knowledge and reasoning skills, covering an extremely broad range of topics. Some of the questions include:

1. Which equipment will best separate a mixture of iron filings and black pepper? (1) magnet (2) filter paper (3) triple-beam balance (4) voltmeter

2. Which form of energy is produced when a rubber band vibrates? (1) chemical (2) light (3) electrical (4) sound

3. Because copper is a metal, it is (1) liquid at room temperature (2) nonreactive with other substances (3) a poor conductor of electricity (4) a good conductor of heat

4. Which process in an apple tree primarily results from cell division? (1) growth (2) photosynthesis (3) gas exchange (4) waste removal

On the non-diagram questions of the 8th-grade test, a program (the Allen Institute's Aristo) was recently able to score over 90%.

C) It's not just about answer selection either; progress in text generation has been impressive. Search, for example, for some of the text samples created by NVIDIA's Megatron model.

2. The rate of progress

Much of this progress has been rapid. Substantial progress on the Winograd schema, for example, still looked like it might be decades away as recently as (from memory) 2018. The computer science is advancing very fast, but it's not clear our concepts have kept up.

I found this relatively sudden progress in NLP surprising. In my head—and maybe this was naive—I had thought that, in order to attempt these sorts of tasks with any facility, it wouldn’t be sufficient to simply feed a computer lots of text. Instead, any “proper” attempt to understand language would have to integrate different modalities of experience and understanding, like visual and auditory, in order to build up a full picture of how things relate to each other in the world. Only on the basis of this extra-linguistic grounding could it deal flexibly with problems involving rich meanings—we might call this the multi-modality thesis. Whether the multi-modality thesis is true for some kinds of problems or not, it’s certainly true for far fewer problems than I, and many others, had suspected.

I think science-fictiony speculations generally backed me up on this (false) hunch. Most people imagined that this kind of high-level language “understanding” would be the capstone of AI research, the thing that comes after the program already has a sophisticated extra-linguistic model of the world. This sort of just seemed obvious—a great example of how assumptions you didn’t even know you were making can ruin attempts to predict the future.

In hindsight it makes a certain sense that reams and reams of text alone can be used to build the capabilities needed to answer questions like these. A lot of people remind us that these programs are really just statistical analyses of the co-occurrence of words, however complex and glorified. However, we should not forget that the statistical relationships between words in a language are isomorphic to the relations between things in the world—that isomorphism is why language works. This is to say, the patterns in language use mirror the patterns of how things are(1). Modelling is transitive—if x models y, and y models z, then x models z. The upshot of these facts is that if you have a really good statistical model of how words relate to each other, that model is also implicitly a model of the world, and so we shouldn't be surprised that such a model grants a kind of "understanding" of how the world works.

[2022 Edit: for a striking example of how the patterns in language mirror the patterns of things in the world, see Word2vec, which considers only the simplest possible pattern (co-occurrence) but still captures a vast amount of information about how items relate to each other. Indeed, it has been shown that patterns of co-occurrence contain such rich data that it is possible to translate the words of one language into the words of another without any parallel texts at all, using nothing but comparisons of the patterns of how words in each language occur near other words of the same language. https://arxiv.org/abs/1710.04087 ]
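To make that concrete, here is a minimal sketch in Python assuming the gensim library and its downloader. It uses small pretrained GloVe vectors, a close cousin of word2vec trained on the same kind of co-occurrence statistics; the particular vector set is an arbitrary choice.

```python
# Minimal sketch: vectors learned purely from co-occurrence statistics
# end up encoding real-world relations, to the point that simple vector
# arithmetic recovers analogies.
# Assumes: pip install gensim
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-50")  # small pretrained vectors

# "king" - "man" + "woman" should land near "queen".
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# Compare how strongly "dog" associates with each candidate category.
print(wv.similarity("dog", "mammal"), wv.similarity("dog", "reptile"))
```

Nothing here was told what a king or a dog is; the relations fall out of nothing but patterns of word co-occurrence.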

It might be instructive to think about what it would take to create a program, not led by NLP, with a model of eighth-grade science sufficient to understand and answer questions about hundreds of different things like "growth is driven by cell division" and "what magnets can be used for". It would be a nightmare of many different (probably handcrafted) models. Speaking somewhat loosely, language allows intellectual capacities to be greatly compressed; that's why it works. From this point of view, it shouldn't be surprising that some of the first signs of really broad capacity—common-sense reasoning, wide-ranging problem solving, and so on—have been found in language-based programs. Words and their relationships are just a vastly more efficient way of representing knowledge than the alternatives.

So I find myself wondering if language is not the crown of general intelligence, but a potential shortcut to it.

3. Meditations on chess

A couple of weeks ago I finished this essay, read through it, and decided it was not good enough to publish. The point that language is isomorphic to the world, and that any sufficiently good model of language is therefore a model of the world, is important, but it's rather abstract, and far from original.

Then today I read a report by Scott Alexander on GPT-2 (a language model) having been trained to play chess. I realised this was the perfect example. GPT-2 has no (visual) understanding of things like the arrangement of a chess board. But if you feed it enough sequences of alphanumerically encoded games—1.Kt-f3, d5 and so on—it begins to pick up patterns in these strings of characters which are isomorphic to chess itself. Thus, for all intents and purposes, it develops a model of the rules and strategy of chess in terms of the statistical relations between linguistic objects like "d5", "Kt" and so on. In this particular case, the relationship is quite strict and invariant: the "rules" of chess become the "grammar" of chess notation.

Exactly how strong this approach is—whether GPT-2 is capable of some limited analysis, or can only overfit openings—remains to be seen. We might have a better idea as it is optimized — for example, once it is fed board states instead of sequences of moves. Either way though, it illustrates the point about isomorphism.
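GPT-2's training is vastly richer than anything that fits in a few lines, but a toy version of the same idea can be sketched: treat games purely as strings of moves and fit the crudest possible statistical model to them. The sketch below is in Python and assumes the python-chess library for generating legal games; everything else is an arbitrary illustrative choice.

```python
# Toy illustration (nothing like GPT-2's scale): a bigram model over
# algebraic chess notation, trained on move strings alone, still absorbs
# some of the structure of the game behind the notation.
# Assumes: pip install chess
import random
from collections import Counter, defaultdict

import chess

def random_game(max_plies=40):
    """Play random legal moves; return the game as a list of SAN tokens."""
    board, moves = chess.Board(), []
    for _ in range(max_plies):
        legal = list(board.legal_moves)
        if not legal:  # checkmate or stalemate
            break
        move = random.choice(legal)
        moves.append(board.san(move))
        board.push(move)
    return moves

# A "corpus" of games, treated purely as text.
follows = defaultdict(Counter)
for _ in range(2000):
    game = random_game()
    for prev, nxt in zip(game, game[1:]):
        follows[prev][nxt] += 1

# Next-"word" prediction from notation statistics alone: the moves the
# model has seen following "e4" are, in practice, mostly real legal
# replies to a pawn arriving on e4.
print(follows["e4"].most_common(5))
```

A bigram model sees only one move of context and so captures almost nothing of strategy; the point is just that the statistics of the notation already constrain it toward the game's actual structure, and a model with vastly more context, like GPT-2, can absorb correspondingly more.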

Of course everyday language stands in a woollier relation to sheep, pine cones, desire and quarks than the formal notation of chess stands in relation to chess moves, and the patterns are far more complex. Modality, uncertainty, vagueness and other complexities enter, not to mention people asserting false sentences all the time, but the isomorphism between world and language is there, even if inexact. "Snow is white" is isomorphic to snow being white. "There are two pine cones on the table" is isomorphic to the number and relationship of the pine cones to the table.

[2022 edit: N.B. Years after I published this essay, a more thorough study of the ability of GPT-2 to know the structure of the chess board and the positions of the pieces came out: “Chess as a Testbed for Language Model State Tracking”. I will refer to this study again in a later essay in this volume.]

4. Postscript—The Chinese Room Argument

After similar arguments are made, someone usually mentions the Chinese room thought experiment. There are, I think, two useful things to say about it:

A) The thought experiment is an argument about understanding in itself, separate from the capacity to handle tasks, and understanding in that sense is a difficult thing to quantify or grasp. It's unclear that the argument has any practical upshot for what AI can actually do.

B) A lot of the power of the thought experiment hinges on the fact that the room answers questions using a lookup table; this stacks the deck. Perhaps we would be more willing to say that the room as a whole understood language if it formed an (implicit) model of how things are, and of the current context, and used those models to answer questions. Even if this doesn't dispel the whole intuition that the room cannot understand Chinese, I think it takes a bite out of it (Frank Jackson, I believe, has made this argument).

[Edit: We will revisit the Chinese Room argument later in this volume.]

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

(1) Strictly speaking, of course, only the patterns in true sentences mirror, or are isomorphic to, the arrangement of the world, but most sentences people utter are at least approximately true.


 
