Types of recursion

post by AnthonyC · 2013-09-04T17:48:55.709Z · LW · GW · Legacy · 16 comments

As a freshman in college I took an intro linguistics class where we spent a lot of time discussing universal grammar and recursive phrase structures. One of the examples we looked at I still don't fully understand - it illustrated two distinct forms of nested phrases that the mind handles very differently.

1. Nested prepositional phrases

The car in the driveway of the house on the street in NY...

I can make that sentence go on indefinitely, and while a reader (or listener) might get bored or forget parts, it will never feel confusing. It's just (The car (in the driveway (of the house (on the street (...))))).

2. Nested tense phrases

The mouse the cat the dog the man walked barked at chased ate the cheese.

Yes, it's grammatical. The mouse ate the cheese. (The mouse the cat chased) ate the cheese. The mouse (the cat the dog barked at) chased ate the cheese. The mouse the cat (the dog the man walked) barked at chased ate the cheese. Fully bracketed, in the style above: (The mouse (the cat (the dog (the man walked) barked at) chased) ate the cheese).

Personally, I lose track with the introduction of the dog. At first I thought it was just a matter of working memory, but the information content is not that high. I can even turn it back into the first kind of recursion and then suddenly have no difficulty keeping it all in my head: The man walked the dog that barked at the cat that chased the mouse that ate the cheese. It seems to be more like a bug in my natural language processing module.

Any suggestions on what might be going on here?

16 comments

Comments sorted by top scores.

comment by shminux · 2013-09-04T18:24:24.719Z · LW(p) · GW(p)

Just a guess: you need more working memory and more complicated processing to parse the second case -- try writing a parser for each!

In the first form (which is like RPN) it's push-on-stack all the way through until the first verb, to construct a complete determiner. In the second form you have to first push all the nouns onto the stack, then keep popping the stack for each verb to match it with the corresponding noun to construct the determiner. Except that the last verb, corresponding to the first noun, is not part of the determiner, so you have to check for that.
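
Something like this minimal sketch (Python; the hand-tokenized input, the bracketing convention, and the split into relative-clause verbs plus a final main verb phrase are simplifications for illustration, not a real grammar):

```python
# Toy illustration only -- hand-tokenized input, no real English parsing.

def parse_form1(head, pps):
    """Form 1: "the car in the driveway of the house ..."
    Each prepositional phrase just opens one level deeper than the last,
    so the bracketing can be emitted in a single left-to-right pass."""
    parts = [head] + [f"{prep} {np}" for prep, np in pps]
    return "(" + " (".join(parts) + ")" * len(parts)

def parse_form2(nouns, verbs, main_vp):
    """Form 2: "the mouse the cat the dog the man walked barked at chased ate the cheese"
    Every noun has to wait on a stack; each verb then pops the most recent
    unmatched noun. The final verb phrase belongs to the outermost noun,
    so it has to be handled separately."""
    stack = list(nouns)            # ["the mouse", "the cat", "the dog", "the man"]
    clause = stack.pop()           # innermost subject: "the man"
    for verb in verbs:             # verbs in the order they are read
        head = stack.pop()         # the noun this relative clause modifies
        clause = f"({head} [{clause} {verb}])"
    return f"{clause} {main_vp}"

print(parse_form1("the car", [("in", "the driveway"), ("of", "the house"),
                              ("on", "the street"), ("in", "NY")]))
# (the car (in the driveway (of the house (on the street (in NY)))))

print(parse_form2(["the mouse", "the cat", "the dog", "the man"],
                  ["walked", "barked at", "chased"], "ate the cheese"))
# (the mouse [(the cat [(the dog [the man walked]) barked at]) chased]) ate the cheese
```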

TL;DR: RPN expressions and parsers are simpler than the alternatives.

Replies from: iDante, Creutzer, AnthonyC, sketerpot
comment by iDante · 2013-09-04T23:40:34.927Z · LW(p) · GW(p)

The first is head recursive. If it were written in opposite order (in NY, on the street, at the house, the car ...) it would be tail recursive and would be very easy to parse. Once we've found NY we can forget that we're there and reuse our stack space to find the street, etc. I think this is why it's so much easier than the second, which is neither head nor tail recursive and so requires a stack frame for each level.
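
A rough way to see the memory claim above (Python; purely a sketch with made-up phrase lists, not a parser):

```python
# Reversed form 1: "in NY, on the street, of the house, ..., the car".
# Each phrase is finished the moment it is read and can be folded into a
# single running result -- the loop below is what a tail-recursive
# definition amounts to, using constant extra memory.
def fold_reversed(phrases):
    location = None
    for phrase in phrases:
        location = phrase if location is None else f"{phrase} (within {location})"
    return location

# Center embedding: no clause can close until the innermost one has, so
# the amount of pending material grows with the nesting depth -- one
# "frame" per level, whether that's a call stack or an explicit list.
def peak_pending(nouns, verbs):
    pending, peak = [], 0
    for noun in nouns:
        pending.append(noun)     # still waiting for its verb
        peak = max(peak, len(pending))
    for _ in verbs:
        pending.pop()            # each relative-clause verb closes one level
    return peak

print(fold_reversed(["in NY", "on the street", "of the house",
                     "in the driveway", "the car"]))
# the car (within in the driveway (within of the house (within on the street (within in NY))))
print(peak_pending(["the mouse", "the cat", "the dog", "the man"],
                   ["walked", "barked at", "chased"]))   # 4, one per noun
```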

comment by Creutzer · 2013-09-04T19:34:31.155Z · LW(p) · GW(p)

I don't understand the sense in which you're using the word "determiner" here. It's not how linguists use the word, because for them, the determiner is just "the"; it's not constructed. It's also not a sense that's explained in the Wikipedia article you linked to.

Replies from: shminux
comment by shminux · 2013-09-04T20:07:53.343Z · LW(p) · GW(p)

Sorry, wrong term, I meant a modifier... It's been years since I looked into the basics of English grammar...

comment by AnthonyC · 2013-09-04T18:50:14.187Z · LW(p) · GW(p)

Interesting. So in the first case, at each stage we have a completed phrase which my brain can regard as a single unit, whereas reading the second case from left to right I don't have a grammatically correct phrase until the very end. Cool. This seems plausible. I wonder how I should go about testing it.

comment by sketerpot · 2013-09-04T19:10:01.367Z · LW(p) · GW(p)

Formally, I believe the first form can be produced by a regular grammar, but the second form cannot. Check out the Chomsky hierarchy for a rundown on the power of each type of grammar.
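
A toy way to see the formal point (Python; the word lists and the regex are made up for illustration and obviously over-generate relative to real English):

```python
import re

# Form 1 has the shape  NP (P NP)*  -- a flat repetition, which a regular
# grammar (here, a regular expression) can generate:
NP = r"(?:the )?\w+"       # toy noun phrase: optional "the" plus one word
P = r"(?:in|of|on|at)"     # toy preposition list
form1 = re.compile(rf"{NP}(?: {P} {NP})*")

s = "the car in the driveway of the house on the street in NY"
print(bool(form1.fullmatch(s)))   # True

# Form 2 has the shape  NP^n V^(n-1) VP: n subject nouns followed by a
# matching number of verbs. Keeping the two counts in step is the classic
# a^n b^n pattern, which no regular grammar can enforce -- it needs at
# least a context-free grammar (equivalently, a stack).
```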

Replies from: Creutzer
comment by Creutzer · 2013-09-04T19:32:42.965Z · LW(p) · GW(p)

Natural language is full of constructions that can't be produced by a regular grammar, but which nobody has any trouble parsing. So that can hardly be the issue.

comment by gattsuru · 2013-09-04T20:34:29.992Z · LW(p) · GW(p)

Human working memory (aka the magic number 7, plus or minus 4 depending on type) is very chunking-aggressive. Everyone can remember a phone number because it's three numbers, whereas they might have problems remembering ten separate digits, and similarly very complex sentences can be boiled down to three or four fragments that are each only one object in memory.

But chunking doesn't work well when there is ambiguity or when the parts cannot yet be brought into a single piece. You can take "the car", then "the car" and "in" and "the driveway", then "the car in the driveway" and "of the house", and so on, until the structure of the first memory object becomes too long to recite internally.

With the second phrase, you have "the mouse" and "the cat" and "the dog", and you're adding a fourth object, and it's yet another noun, so the English language doesn't let /any/ of these things clearly chunk together. There are some center-embedded sentences that chunk more readily, and some languages that allow more same-part-of-speech chunking, but there's an upper limit to what you can do with human neurology.
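
A crude sketch of that chunking story (Python; the merge rule and the "one unit" bookkeeping are a deliberate oversimplification, not a cognitive model):

```python
PREPOSITIONS = {"in", "of", "on", "at"}

def peak_units(phrases):
    """Greedy chunking over pre-split phrases: a prepositional phrase merges
    into the unit just before it, while a bare "the X" has to be held as a
    separate unit. Returns the largest number of units held at any point."""
    held, peak = [], 0
    for phrase in phrases:
        if held and phrase.split()[0] in PREPOSITIONS:
            held[-1] = f"{held[-1]} {phrase}"   # chunks into the previous unit
        else:
            held.append(phrase)                 # a new unit to keep track of
        peak = max(peak, len(held))
    return peak

print(peak_units(["the car", "in the driveway", "of the house",
                  "on the street", "in NY"]))                      # 1
print(peak_units(["the mouse", "the cat", "the dog", "the man"]))  # 4
```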

((This is somewhat related to the preference for front-loaded active voice in writing and speaking technique.))

Replies from: bogdanb
comment by bogdanb · 2013-09-09T18:01:23.070Z · LW(p) · GW(p)

Everyone can remember a phone number because it's three numbers, whereas they might have problems remembering ten separate digits

This is slightly irrelevant, but for some reason I can't figure out at all, I learned pretty much all the phone numbers I know (and, incidentally, the first thirty or so decimals of π) digit by digit rather than in groups. The only exception was when I moved to France: I learned my French number by separate digits (i.e., five-eight instead of fifty-eight) in my native language, but grouped in tens (i.e., by pairs) in French. This isn't a characteristic of my native language, either; nobody even in my family does this.

Replies from: None
comment by [deleted] · 2013-10-20T21:48:01.899Z · LW(p) · GW(p)

I once memorized the periodic table out to 54 places (xenon) by name, as a sequence with a few numeric fixed points. This helped me in high-school chemistry. I've lost some chunks of the higher parts, but I have intuitions about most anything in the periodic table. Some of this is visual memory.

I memorized that as a verbal thing initially, kinda like the alphabet song (which I know a large number of people still sing internally when they need to sort stuff lexicographically). But even the alphabet I have successfully moved partially to visual memory.

IMO, visual memory is an underused resource for auditory thinkers (like myself), and probably vice versa.

comment by Pfft · 2013-09-05T02:16:45.446Z · LW(p) · GW(p)

By some random Googling I found E. Gibson, "Linguistic complexity: Locality of syntactic dependencies", which proposes a model of how much working memory is needed to parse various kinds of syntactic structures and claims to explain why center embedding is hard.

It seems to be very popular (1315 citations on Google Scholar); I bet if you look at citing and related papers you can find lots of discussion.

comment by Creutzer · 2013-09-04T19:30:50.556Z · LW(p) · GW(p)

The term for the second kind of embedding is center embedding, by the way. It's got nothing to do with whether prepositional phrases or tensed phrases are involved; the issue is whether the embedding occurs at the edge of the phrase or, as the name suggests, in the center. Unfortunately, I'm not familiar with the literature on that, so I can't tell you what explanations have been proposed for why it's harder to process. But this is the term you'll need to Google for.

comment by TsviBT · 2013-09-04T18:37:33.977Z · LW(p) · GW(p)

Cf. Chad Gadya.

comment by Gurkenglas · 2013-09-04T18:24:14.467Z · LW(p) · GW(p)

Maybe it's just that people don't expect the second structure or aren't used to it? I predict that sufficient exposure to structure 2 will end the confusion.

That's all. Surely, (...) the normal explanation is always worth considering?

Replies from: AnthonyC
comment by AnthonyC · 2013-09-04T18:44:53.974Z · LW(p) · GW(p)

Interesting. I wonder, are there natural languages - or communities - where the second structure is commonly used? If not, that may be evidence favoring a natural inclination towards (or ability to use) the first structure.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-09-07T19:39:32.347Z · LW(p) · GW(p)

I wonder, are there natural languages - or communities - where the second structure is commonly used?

This is how the German language seems to me, but I don't speak German well; if I am correct, could someone please provide an example of a similarly structured sentence, with a word-for-word translation into English?