Increasing the Span of the Set of Ideas

post by Jeffrey Heninger (jeffrey-heninger) · 2024-09-13T15:52:39.132Z · LW · GW · 1 comment

Contents

  Introduction
  The Span of the Set of Ideas
  Two Claims
    Humans
      Claim 1: Humans, or biological organisms more generally, can increase the span of the set of ideas.
    Neural Nets
      Claim 2: Neural nets do not increase the span of the set of ideas.
    How Do Brains Work?
  Searching Large Sets
    New Strategies in Chess
    The Invention of Language
  Conclusions

Epistemic Status: I wrote this back in January, and have been uncertain about whether to publish it. I expect that most people who read this here will be unconvinced. But I still want to express my intuition.

In the last month, these ideas have come up in conversation again. Then, a few days ago, Toby Ord published a paper on arXiv that substantially overlaps with the ideas presented here, so it seems like it is time to post this. I considered rewriting this post to better respond to and engage with that paper, but decided to leave it as is.

Introduction

I have previously stated that I am skeptical that AGI is possible on a (classical) computer.[1] It seems virtuous to try to turn this into specific predictions about what things can and cannot be done using neural nets, the current leading model of AI development.

I am aware that similar sorts of predictions by other people have failed. Nevertheless, I will make predictions.

Here are some things that I would be surprised if AI were able to do:

The key thing unifying all of these examples is that each involves an invention which is not merely a recombination of the things the AI was trained on. If you have additional examples that you think reflect what I am saying in this post, suggest them in the comments, and I might add them to the list.

The Span of the Set of Ideas

Consider the set of all thoughts anyone has ever had. Now consider all possible recombinations of these ideas or parts of these ideas. I will call this the span of the set of ideas. The analogy is to vector spaces: the span of a set of vectors is the set of all linear combinations of those vectors. A set of ideas might not be a vector space and recombining ideas might not be vector addition (and scalar multiplication), but I think the analogy works well enough.
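For concreteness, the notion being borrowed from linear algebra: for a set of vectors $S$,

$$\operatorname{span}(S) = \Big\{ \textstyle\sum_{i=1}^{n} c_i v_i \;:\; v_i \in S,\ c_i \in \mathbb{R},\ n \in \mathbb{N} \Big\}.$$

The analogy replaces the vectors with ideas and linear combination with recombination; nothing in what follows depends on the combination rule actually being linear.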

It also might be worthwhile to think of the span of various subsets of ideas. One subset could be the set of all thoughts expressible using current English words. Another could be all images anyone has seen or imagined. A third could be all scientific theories or hypotheses anyone has considered. The span of each of these would consist of all possible recombinations of the ideas in the set.

Two Claims

I would like to make two claims about the span of the set of ideas: (1) Human creativity does not merely include the ability to recombine ideas; it also includes the ability to add new ideas which increase the span of the set of ideas, and (2) the AI models known as neural nets can be masters of recombination, but do not increase the span of the set of ideas.

Humans

Claim 1: Humans, or biological organisms more generally, can increase the span of the set of ideas.

At some point, there were no thoughts in the world. Nothing was experiencing or reasoning. The span of the set of ideas at this time was the empty set. Now, I can be extremely confident that there is at least one being in the world with thoughts - and there seem to be many more. At some point, some biological organism thought the first thoughts. This act guarantees that biological organisms have had the ability to expand the span of the set of ideas.[5]

Humans think about wildly more things than chimpanzees - or amphibians. The things we think about seem like more than recombinations of the thoughts of our recent or more distant ancestors. At some point in between, the span of the set of ideas had to increase.

Sometimes, people create new things through great acts of genius. On closer inspection, some of these are revealed to be recombinations of prior things. But some of them do not seem to be recombinations. The list at the top of this post contains some things that humans have done that I think increased the span of the set of ideas.

I expect that this ability is not only used in moments of great genius, and that there exist less impressive versions of it that are used more commonly. The dramatic examples would then be people who are unusually good at a common skill, rather than people who have access to an unusual skill.

Note that these arguments do not depend on a model of the brain. I do not have a detailed understanding of how the brain works. I don’t think that anyone else does either, although some people undoubtedly know more about it than I do. All that is necessary for these arguments is to show that this is a thing that has happened.

Neural Nets

This section argues about neural nets in particular, rather than arbitrary algorithms that can be implemented on a classical computer. The generalization is mostly based on intuition.

Claim 2: Neural nets do not increase the span of the set of ideas.

The architecture of a neural net strongly suggests recombination. Each layer combines inputs from the previous layer and passes the combination through a simple function. This process is repeated over many nodes in many layers, with the inputs being iteratively combined and recombined, resulting in a very general high dimensional function from inputs to outputs.
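A minimal sketch of this structure, with hypothetical weights and NumPy standing in for any particular framework:

```python
import numpy as np

def layer(x, W, b):
    # Affine recombination of the inputs, passed through a simple
    # elementwise nonlinearity (ReLU here; others work similarly).
    return np.maximum(0.0, W @ x + b)

# Hypothetical weights for a tiny two-layer net: 4 inputs, 2 outputs.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
h = layer(x, rng.normal(size=(8, 4)), np.zeros(8))  # hidden layer
y = layer(h, rng.normal(size=(2, 8)), np.zeros(2))  # output layer
```

Every intermediate value here is a weighted combination of earlier values; stacking layers yields a very flexible function, but one built entirely out of recombinations of its inputs.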

The specific things that are the inputs to the neural net, and which the neural net can recombine, are:

I claim that the output of a neural net is always a recombination of these things. Neural nets seem like they can be much better than humans at recombination, so the result can be sophisticated and impressive, but the answer will never be outside of the span of the set of inputs.

There is a narrower, more technical version of this blog post's claim, which includes:

Multimodal models exist, but they have to be designed as such. 

How Do Brains Work?

If the neural nets from computer science do not increase the span of the set of ideas, by what mechanism do the networks of actual neurons in biological organisms achieve this?

I do not know. I do not think that anyone else knows either.

While neural nets were inspired by, and named after, networks of biological neurons, their structure is quite different. Progress in neural nets has largely taken them farther from the original analogy.[6] Two important ways in which they are different include: (1) neurons, or synapses, are not simple things, but instead have complicated internal dynamics that can amplify smaller scale uncertainties in nontrivial ways,[7] and (2) networks of neurons are not arranged in discrete layers with an unambiguous ordering of inputs and outputs, but instead connect in many complicated loops.[8] Both of these make networks of neurons not-a-function. Given the same inputs (including the seed of the random number generator if there is one), a neural net will produce the same output. A network of biological neurons generally will not.
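A toy illustration of that last claim, using a hypothetical two-layer net: once the seed is fixed, the whole computation is an ordinary deterministic function.

```python
import numpy as np

def tiny_net(x, seed=0):
    # All randomness enters through an explicit seed, so the whole
    # computation is a function: same input and seed, same output.
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(8, 4))
    W2 = rng.normal(size=(2, 8))
    return W2 @ np.maximum(0.0, W1 @ x)

x = np.ones(4)
assert np.array_equal(tiny_net(x), tiny_net(x))  # always passes
```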

Searching Large Sets

New Strategies in Chess

When I claim that there is something neural nets cannot do, I should be careful that it isn't something neural nets have already done. We have already built AI with superhuman capabilities in some fields (mostly games), so we can check whether this ability has appeared there.

Superhuman chess-playing AI does not just perform existing strategies more precisely than humans do - it also seems to have good strategic ‘understanding,’ and will sometimes play moves that humans would not have previously considered.

This is hard to operationalize, but does seem true. One potential example is advancing your h-pawn in the middlegame: powerful chess AI played this move more commonly than human grandmasters did, and grandmasters later learned how to use this strategy.[9]

I claim that this is an example of a recombination of chess moves, rather than a novel chess strategy.

This example demonstrates that there can be multiple ways of thinking about a problem. We can think of the span of the set of known chess strategies, or we can think of the span of the set of chess moves. The span of the set of chess moves is obviously much larger than the span of the set of known chess strategies.

These two ways of thinking about chess suggest different ways to play: ‘tactical play,’ which involves calculating sequences of moves, and ‘positional play,’ which involves focusing on the broader strategic landscape. Both humans and chess-playing neural nets do both.

My claim is that, if a chess-playing neural net were to use exclusively positional play, it would only use strategies that are recombinations of strategies in its training set. The novel strategies that we do see chess-playing neural nets use come from tactical play, where the neural net can calculate more moves faster than any human.[10] The span of the set of chess moves (or not obviously bad chess moves) is small enough to search directly for moves that will be good, even if they are not based on any existing strategy.
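To make "searching a large set of candidate sequences of moves" concrete, here is a minimal sketch of tactical search. It assumes the python-chess library; the fixed depth and the material-only evaluation are placeholders for illustration, not how real engines are built:

```python
import chess  # assumes the python-chess library (pip install chess)

# Placeholder piece values; real engines use far richer evaluations.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board):
    # Crude evaluation: material balance from the side to move's view
    # (checkmate handling omitted for brevity).
    score = sum(PIECE_VALUES[p.piece_type] * (1 if p.color == chess.WHITE else -1)
                for p in board.piece_map().values())
    return score if board.turn == chess.WHITE else -score

def negamax(board, depth):
    # Tactical play as brute force: enumerate sequences of legal moves,
    # with no strategic knowledge beyond counting material at the leaves.
    if depth == 0 or board.is_game_over():
        return material(board)
    best = -float("inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

def score(board, move, depth=2):
    board.push(move)
    value = -negamax(board, depth)
    board.pop()
    return value

board = chess.Board()
best_move = max(board.legal_moves, key=lambda m: score(board, m))
```

Everything such a search finds is reachable by enumerating legal moves; no notion of strategy appears anywhere in it.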

When humans play chess creatively, they likely do sometimes use the ability to increase the span of the set of ideas to come up with new strategies. The result might even be similar to the brilliant moves found by the neural nets. But, I claim, the way it was found would be different: in one case, by using this kind of creativity on chess strategies, and in the other case, by searching a large set of candidate sequences of moves.

The Invention of Language

Maybe allowing for spans of the set of ideas at multiple levels makes what I'm arguing much less interesting. If anything that requires this kind of creativity could be lifted to a search over a larger set, then not having this kind of creativity would not be much of a restriction.

I do not think that searching over a larger set will always work. Identifying the relevant larger set could be prohibitively difficult, and maybe an appropriate large set does not exist. Some sets are so large that it is prohibitively difficult to search over them.[11]

While some things could be done by searching over large sets, I do not think that this is what human creativity is doing.

Several of the predictions at the top of this post involve inventing language. There are larger sets to consider searching over: the set of all utterable sounds for spoken language[12] and the set of all drawable shapes for writing. These seem like much larger sets than the set of all chess moves, and language seems much harder to find.

There is a story one could tell about how ancient hominids invented language by searching over the set of utterable sounds.

Earlier primates used various vocalizations to communicate. Having a larger number of distinct utterances, or ‘words,’ enables better communication, which makes individuals in these groups more fit. There would thus be a trend towards larger vocabularies. Having a large vocabulary is not sufficient for language: you also need grammar. This might have developed in a similar way, gradually over millions of years of group selection for increasingly complex associations of words, until eventually, human grammar developed, with recursive nesting allowing for arbitrarily complex combinations of words.

I do not know if this story is true. Neither does anyone else.

I can confidently say that modern humans can invent language without going through this whole process. Why?

Nicaraguan Sign Language.

Wikipedia describes the history of Nicaraguan Sign Language as follows:[13]

Prior to the late 1970s, Nicaragua did not have a sign language community. Deaf people were scattered across the country and did not interact with each other much. They typically had a few hundred signs or gestures to communicate with their family and friends, but did not have language.

In 1977, the Nicaraguan government decided to open a school for the deaf. They had 40 children enrolled in 1977, 100 children enrolled in 1980, and 400 children enrolled in 1983. Nicaragua did not have any teachers who knew sign language, so they instead tried to teach lipreading - which failed. By 1986, the students and teachers were still linguistically distinct. The school invited an American linguist who knew sign language to come help. She found that the students had invented a new language! It has some familiar features, like verb conjugation, but also some grammatical constructions which are highly unusual.

A few hundred children who did not know language were able to invent language in less than a decade.[14] This is too few person-hours to search the set of gestures and find grammar. Dramatically expanding the set of words and the complexity of grammar is an ability that groups of humans have.

Modern humans’ ability to invent language seems to be an instance of the kind of creativity that allows you to increase the span of the set of ideas.

Conclusions

The main points of my argument are:

  1. Neural nets can be extremely good at recombining ideas which are present in their inputs, and can search large, but not extremely large, sets for new strategies.
  2. Humans have an additional ability, a kind of creativity, that allows us to move outside of the span of the set of our inputs.

This argument does not feel particularly surprising to me. Neural nets do not process information the same way as networks of biological neurons, so it would not be surprising if they have different skills. Some other people also seem to have the intuition that creativity would be extremely hard (impossible?) to implement with an AI.[15]

Some people might read this or similar arguments to imply that AGI is impossible, and so we do not need to worry about dangerous AI. This seems pretty wrong.

I am skeptical that AGI can be achieved on a (classical) computer. This argument does underlie part of my intuition, but I do not think that it by itself would be fully convincing.

This argument also tells us something about what is needed for AGI. A machine that can perfectly replicate anything anyone has ever done, but do nothing more, is not fully general. We also expect humans to do new things in the future.

Something does not need to have this kind of creativity to be dangerous. I would rather not live in a world that contains optimized combinations of capable and terrible people from the past. Even if AI cannot have this kind of creativity, that does not mean that there is nothing to worry about from powerful or dangerous AI.

 

Thank you to Rick Korzekwa, Harlan Stewart, Dávid Matolcsi, and William Brewer for helpful conversations on this topic.

 

  1. ^

    Jeffrey Heninger. My Current Thoughts on the AI Strategic Landscape. AI Impacts Blog. (2023) https://blog.aiimpacts.org/p/my-current-thoughts-on-the-ai-strategic.

    I am using the Annie Oakley definition of AGI: “Anything you can do, I can do better.”

  2. ^

    Let's say by 2050 to make the prediction precise.

  3. ^

    Most ‘single language’ corpora contain samples of other languages. These would have to be purged from the training set in order to test this.

  4. ^

    The inverse, an image model trained exclusively on cubist art inventing naturalism, would be even more surprising. I’m not sure where it would get the information about what the natural world looks like.

  5. ^

    This argument could be countered by panpsychism or by claiming a divine origin of ideas. Since neither of these positions seems common in this community, I will not address them here.

  6. ^

    For example, computer scientists initially used sigmoid-like functions to approximate the step function found in biological neurons. More recently, it was found that rectified linear unit (ReLU) activation functions work better for most tasks.
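    In symbols: the smooth step is $\sigma(x) = \frac{1}{1 + e^{-x}}$, while $\mathrm{ReLU}(x) = \max(0, x)$.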

  7. ^

    Jeffrey Heninger and Aysja Johnson. Chaos in Humans. AI Impacts Wiki. (2023) https://wiki.aiimpacts.org/uncategorized/ai_safety_arguments_affected_by_chaos/chaos_in_humans.

  8. ^

    Recurrent neural nets have this second property to a limited extent.

  9. ^

    Joshua Doknjas. How the AI Revolution Impacted Chess. ChessBase. (2022) https://en.chessbase.com/post/how-the-ai-revolution-impacted-chess-1-2.

  10. ^

    I do not know if this claim is actually testable. It might not be possible to draw a sharp distinction between tactical and positional play.

  11. ^

    The size of the space something can search over depends on the capabilities of the searcher. But there are also some sets that are so large that they would require more information than is available in the universe to search over. Combinatorial explosions scale really fast.
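    For a sense of scale: Shannon's classic estimate for chess allows about $10^3$ continuations per pair of moves over a 40-move game, giving roughly $(10^3)^{40} = 10^{120}$ possible games, while the observable universe contains only about $10^{80}$ atoms.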

  12. ^

    This implicitly assumes that language was invented on a faster timescale than vocal cords evolved. You can relax that assumption and make a similar argument, although it would be harder to state clearly.

  13. ^

    Nicaraguan Sign Language. Wikipedia. (Accessed January 29, 2024) https://en.wikipedia.org/wiki/Nicaraguan_Sign_Language.

    Most of the sources cited in the Wikipedia article are behind paywalls, so I did not independently verify that this is what they are saying.

  14. ^

    Feral children, who grow up with little-to-no human contact, do not invent language and often have difficulty learning language as adults. The group interactions seem to be important.

  15. ^

    A quick search for “AI Creativity” yields some examples of people whose intuition seems to be similar to my own, e.g.:

    There are also some articles with very different intuition.

1 comment


comment by Jeffrey Heninger (jeffrey-heninger) · 2024-09-13T16:13:02.087Z · LW(p) · GW(p)

A few more thoughts on Ord's paper:

Despite the similarities, I think that there is some difference between Ord's notion of hyperbolation and what I'm describing here. In most of his examples, the extra dimension is given. In the examples I'm thinking of, what the extra dimension ought to be is not known beforehand.

There is a situation in which hyperbolation is rigorously defined: analytic continuation. This takes analytic functions defined on the real axis and extends them into the complex plane. The first two examples Ord gives in his paper are examples of analytic continuation, so his intuition that these are the simplest hyperbolations is correct.
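The standard first example: the geometric series $\sum_{n=0}^{\infty} z^n$ converges only on the disk $|z| < 1$, where it equals $\frac{1}{1-z}$. The latter expression is defined on all of $\mathbb{C} \setminus \{1\}$ and is the unique analytic continuation of the former.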

More generally, solving a PDE from boundary conditions could be considered to be a kind of hyperbolation, although the result can be quite different depending on which PDE you're solving. This feels like substantially less of a new ability than e.g. inventing language.