Isnasene's Shortform

post by Isnasene · 2019-12-21T17:12:32.834Z · LW · GW · 1 comment


comment by Isnasene · 2019-12-21T17:12:33.013Z · LW(p) · GW(p)

I've been thinking for a while about the importance of language processing for general intelligence -- mainly because I think it might be one of the easiest ways to achieve it. Here are some vague intuitions that make me think this:

  • Language processing is time-independent, which affords long-term planning. For instance, the statement "I am going to eat food tomorrow and the next day" feels about as easy to fill in as "I am going to eat food this year and next year." Someone with language processing can therefore generate plans on arbitrary time-scales, so long as they've learned that part of language.
  • Language processing is a meta-level two-way map between object-level associations, which accelerates learning. Natural language processing takes a word and maps it to a fuzzy association of concepts, and also maps that fuzzy association back to a word. This means that if you learn a new word:concept map, and your natural language processing unit can put that word in the context of other words, you can in principle join your fuzzy map of the object-level concept the word references with all the other object-level fuzzy concepts you already have (see the first sketch after this list).
    • This doesn't always happen automatically in humans (i.e. we can learn something verbally without internalizing it), but humans can internalize verbal knowledge too, and language helps us do that faster.
  • Language can be implemented as a sparse map between object-level associations, which makes it faster and more memory-conservative than learning everything at the object level. You don't need fuzzy-association knowledge about every complex concept your words reference; you only need the fuzzy associations between actionable words. Planning can happen entirely in language, and object-level associations are only required for words that imply action (see the second sketch after this list).
  • Language is communicable while raw observations are not, which affords a kind of one-shot learning. Instead of discovering a fatal trap through experience (which would be pretty risky), you can read about the trap, read about what people do in the context of "fatal traps," and then map those actions to your object-level understanding of them. This means you never have to learn, at the object-level fuzzy-concept level, what fatal traps really are.
  • The question of which words map to which fuzzy concepts often gets decided through memetic competition. Words tied to easier-to-communicate concepts spread faster because more people communicate them, which suggests that word:concept maps are partially optimized for human learnability.
  • I do most of my planning using language processing, so, from an experiential point of view, language seems obviously useful for general intelligence.
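
To make the two-way map idea concrete, here is a minimal sketch in Python. It assumes concepts can be crudely modeled as sets of object-level features; the `Lexicon` class, the feature sets, and the example words are all invented for illustration, not a claim about how a real system would represent concepts.

```python
class Lexicon:
    """A two-way map between words and fuzzy concepts (modeled as feature sets)."""

    def __init__(self):
        self.word_to_concept = {}   # word -> set of object-level features
        self.feature_to_words = {}  # feature -> set of words touching it

    def learn(self, word, features):
        """Learn a new word:concept pair, indexing it in both directions."""
        self.word_to_concept[word] = set(features)
        for f in features:
            self.feature_to_words.setdefault(f, set()).add(word)

    def related_words(self, word):
        """Words whose concepts share features with `word`'s concept.

        Learning one word:concept pair immediately links the new concept
        to every existing concept sharing a feature -- the "joining of
        fuzzy maps" described above.
        """
        related = set()
        for f in self.word_to_concept.get(word, set()):
            related |= self.feature_to_words[f]
        related.discard(word)
        return related


lex = Lexicon()
lex.learn("stove", {"hot", "kitchen", "metal"})
lex.learn("oven", {"hot", "kitchen", "enclosed"})
lex.learn("snow", {"cold", "outside"})

# Learning "kettle" connects it to "stove" and "oven" through shared
# object-level features, with no new first-hand experience needed.
lex.learn("kettle", {"hot", "kitchen", "water"})
print(lex.related_words("kettle"))  # {'stove', 'oven'} (order may vary)
```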
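
And a second sketch for the sparse-map point: the plan below is manipulated purely as a sequence of words, and object-level "groundings" are looked up only for the few words that imply action. The `groundings` dict and the plan itself are, again, hypothetical.

```python
# Object-level know-how, stored only for words that imply action.
groundings = {
    "walk": lambda: print("  [motor routine: walking]"),
    "open": lambda: print("  [motor routine: opening]"),
    "eat":  lambda: print("  [motor routine: eating]"),
}

# The plan lives entirely at the word level; "tomorrow" and "kitchen"
# never need a fuzzy object-level representation to be planned over.
plan = ["tomorrow", "walk", "kitchen", "open", "fridge", "eat"]

for word in plan:
    print(word)
    if word in groundings:  # sparse: ground only the action words
        groundings[word]()
```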

I suppose one implication of this is that language is useful as part of whatever thinking architecture a general intelligence winds up with, and that the (internal) language it picks up will probably be one that already exists. That is to say, a general intelligence might literally think in English. There are still dangers here, though: even if the AI thinks in English, it probably won't "do English" in the same way that humans do.