Structure, creativity, and novelty

post by TsviBT · 2023-01-29T14:30:19.459Z · LW · GW · 4 comments

Contents

  Elements and structure
  Novelty, creativity
    Acquiring elements
    Peirce's abduction
  Measuring structuredness
    Examples part 1: Compressibility (prediction, surprise)
    Examples part 2: Definability (computational strength, quantifier complexity, expressive strength)
    Examples part 3: Provability (logical strength)
    Remarks on examples of structure
  Synopsis

[Metadata: crossposted from https://tsvibt.blogspot.com/2022/08/structure-creativity-and-novelty.html. First completed 26 June 2022. I'm likely to not respond to comments promptly.]

A high-level confusion that I have, and that seems to be on the way towards understanding alignment, is the relationship between values and understanding. This essay gestures at the idea of structure in general (mainly by listing examples).

Why do we want AGI at all?

We want AGI in order to understand stuff that we haven't yet understood.

(This is not a trivial claim. It might be false. It could be that to secure the future of humane existence, something other than understanding is necessary or sufficient; e.g. it's conceivable that solving some large combinatorial problem, akin to playing Go well or designing a protein by raw search with an explicit criterion, would end the acute risk period. But I don't know how to point at such a thing--plans I know how to point at seem to centrally involve understanding that we don't already have.)

Elements and structure

Understanding implies some kind of structure. (This is a trivial claim, or a definition: structure is what a mind is or participates in, when it understands.) Structure is made of elements. "Structure" is the mass noun of, or continuous substance version of, "element". The point of the word "element" is just to abbreviate "any of that pattern-y, structure-y stuff, in a mind or in the world in general".

Elements. An element (of a mind) is anything that combines to constitute the mind, at any level of organization or description.

Novelty, creativity

Acquiring elements

A mind's internal process of creating elements is creativity. The result of creativity is novelty: elements that are new to the mind.

Novelty can be encountered or acquired in ways other than creativity. Other minds are a source of novelty, encountered and potentially acquired but not created. Learning is a clear example of acquiring novelty, and most learning is only somewhat creative, being heavily alloyed with copying from another mind. Learning to do something on your own is creativity. Creativity is the "creative edge" or "creative froth" of thought: search, trying things out, program search, combinatorial thinking, tweaking ideas. Evolution and automated proof search are creative non-minds: they create novel structures, without having the context in which those structures are fully themselves. An example of encountering novelty without acquiring it is if a superintelligent AGI kills you by understanding stuff that you don't understand, or if you see a car with an internal combustion engine go fast without knowing about PV=nRT and gears (even if you've already seen cars before; novelty is perennially novel until it's acquired).
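As a toy illustration of that creative froth, here is a minimal brute-force program search in Python (a sketch only; the expression grammar and target function are made up for the example, and real proof search or evolution is of course far richer):

```python
# Toy "program search": enumerate small arithmetic expressions in x and keep the
# first one that reproduces a target function on a handful of test inputs.
# The search has no understanding of why the found expression works; it just
# generates candidate structures and checks them.
from itertools import product

TARGET = lambda x: x * x + 1   # behavior we want to re-create (made up for the sketch)
TESTS = [0, 1, 2, 3, 5]

ATOMS = ["x", "1", "2"]
OPS = ["+", "-", "*"]

def expressions(depth):
    """Yield expression strings with at most `depth` levels of nesting."""
    if depth == 0:
        yield from ATOMS
        return
    yield from expressions(depth - 1)
    for op, left, right in product(OPS, expressions(depth - 1), expressions(depth - 1)):
        yield f"({left} {op} {right})"

def search(max_depth=2):
    for expr in expressions(max_depth):
        if all(eval(expr, {"x": x}) == TARGET(x) for x in TESTS):
            return expr
    return None

print(search())   # prints an expression equivalent to x*x + 1, e.g. "(1 + (x * x))"
```

The search produces a structure it was never given, but only a mind that can use and situate that structure has it as fully itself.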

Elements can be acquired by creating them (creativity), or by encountering and copying them from elsewhere, e.g. from other minds.

Peirce's abduction

Charles Sanders Peirce described three kinds of inference: deduction, induction, and abduction.

All three of these kinds of inference involve novelty. They are interwoven with each other. For example:

But overall, abduction is the most creative form of inference: abductive reasoning always involves self-generated novelty, and if all the elements generated by abduction fail to be novel to the reasoner, then it was a failed abduction.

We could add non-linguistic elements to Peirce's scheme of inference:

Measuring structuredness

This section lists some theories that sift out the essence of some kinds of structure and compare structure with structure. This isn't trying to be comprehensive or to demarcate anything; it's a collection intended to gesture at what structure is by describing some of the gross contours of the universe of structure. For some coordinates of structure, see this list of directions in the space of concepts.

Examples part 1: Compressibility (prediction, surprise)

Theme: structuredness correlates with locating or being a small target in a large space.
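A crude concrete proxy (a minimal sketch, using a general-purpose compressor as a stand-in for any real measure of structure): a patterned string is a small target describable by a short program and compresses well; a random string of the same length does not.

```python
# Crude proxy for structuredness: compressibility. A heavily patterned string is a
# "small target" describable by a short program and shrinks a lot; a random string
# of the same length barely shrinks at all.
import os
import zlib

n = 10_000
structured = b"abcd" * (n // 4)   # heavily patterned
random_ish = os.urandom(n)        # incompressible with overwhelming probability

for name, data in [("structured", structured), ("random", random_ish)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compressed to {ratio:.1%} of original size")
```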

Examples part 2: Definability (computational strength, quantifier complexity, expressive strength)

Theme: structuredness correlates with being able to describe / point at / compute / subsume many things.
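A standard instance of this theme, stated loosely (textbook material rather than anything specific to this post): the arithmetical hierarchy grades definability by quantifier alternations, and each extra alternation strictly enlarges what can be defined.

```latex
% Arithmetical hierarchy: definability graded by quantifier alternations over a
% decidable matrix. \Sigma^0_1 sets are those of the form \{x : \exists y\, R(x,y)\}
% with R decidable (e.g. the halting set); \Pi^0_1 sets use \forall instead; and the
% hierarchy is proper, so each extra alternation strictly enlarges what is definable:
\Sigma^0_n \subsetneq \Sigma^0_{n+1}, \qquad
\Pi^0_n \subsetneq \Pi^0_{n+1}, \qquad
\Delta^0_{n+1} = \Sigma^0_{n+1} \cap \Pi^0_{n+1}.
```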

[Entering Higher Recursion Theory Zone, which I don't understand so well]

Examples part 3: Provability (logical strength)

Theme: structuredness correlates with deductively implying many things (while being consistent).
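One standard way to make this precise (again, textbook material rather than anything specific to this post): order theories by whether one proves the consistency of another. Gödel's second incompleteness theorem then yields a strictly increasing tower of logical strength.

```latex
% Logical (consistency) strength: by Goedel's second incompleteness theorem, a
% consistent, recursively axiomatized theory extending arithmetic cannot prove its
% own consistency statement, so adjoining that statement gives a strictly stronger
% theory (as a set of theorems), and the process never terminates:
\mathrm{PA} \;\subsetneq\; \mathrm{PA} + \mathrm{Con}(\mathrm{PA})
\;\subsetneq\; \mathrm{PA} + \mathrm{Con}\bigl(\mathrm{PA} + \mathrm{Con}(\mathrm{PA})\bigr)
\;\subsetneq\; \cdots
```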

Remarks on examples of structure

The examples above are somewhat arranged in order of complexity of the structure they describe. Complexity is correlated with "depth", but is not the same; simple things are often "deep", and things that are complex in some sense can be "shallow".

The above list is heavily biased towards things that I'm aware of, things that have some interesting developed theory, and things that fit into hierarchies and uniform comparisons. What other measurements or notions of structure-in-general are there? There are notions of simulation, e.g. "bisimulation", but I'm not aware of very interesting general results there. There's the informal notion of "deep" mathematics, or "deep" insights in general, which have the flavor of retrodiction and the flavor of being generally useful and implying many other things. See Penelope Maddy's work.

It's not necessarily interesting to try specifically to "measure structure", but speaking vaguely, I would like to know how different kinds or dimensions of structure relate. E.g., when someone learns a skill, in what senses are they accessing / using / participating in / creating propositions? (More concretely, what other skills must they be enabling themselves to also learn easily?) Algorithmic complexity theory and computability/definability theory touch on the complexity of "concepts" in some sense, but there's a lot left to ask about; when / how / in what senses does a mind come to understand something, and how can you tell, and what does that imply about what the mind can, can't, will, or won't do?

Synopsis

To interface with a mind, we have to understand what it understands. Understanding is some kind of structure. Minds are made of elements. Structure is elements. Structure that's new to a mind is novelty. Creativity is the process of generating novelty. Structuredness correlates with compression, expression, and implication.

4 comments

Comments sorted by top scores.

comment by Gordon Seidoh Worley (gworley) · 2023-02-02T04:31:17.163Z · LW(p) · GW(p)

Alright, fair warning, this is an out there kind of comment. But I think there's some kind of there there, so I'll make it anyway.

Although I don't have much of anything new to say about it lately, I spent several years really diving into developmental psychology, and my take on most of it is that it's an attempt to map changes in the order of complexity of the structure thoughts can take on. I view the stages of human psychological development as building up the mental infrastructure to be able to hold up to three levels of fully-formed structure (yes, this is kind of handwavy about what a fully-formed structure is) in your mind simultaneously without effort (i.e. your System 1 can do this). My most recent post exploring this idea in detail is here [LW · GW].

This fact about how humans think and develop seems like an important puzzle piece in understanding, among other things, your questions around understanding what other minds understand.

For example, as people move through different phases of psychological development, one of the key skills they gain is better cognitive empathy. I think this comes from being able to hold more complex structures in their mind and thus being able to model other minds more richly. An interesting question I don't know the answer to is if you get more cognitive empathy past the end of where human psychological development seems to stop. Like, if an AI could hold 4 or 5 levels simultaneously instead of just 3, would they understand more than us, or just be faster? I might compare it to a stack-based computer. A 3-register stack is sufficient to run arbitrary computations, but if you've ever used an RPN calculator you know that having 4 or more registers sure makes life easier, even if you know you could always do it with just 3.

I don't know that I really have a lot of answers here, but hopefully these are somewhat useful puzzle pieces you can work on fitting together with other things you're looking at.

Replies from: TsviBT
comment by TsviBT · 2023-02-05T16:24:51.522Z · LW(p) · GW(p)

An interesting question I don't know the answer to is if you get more cognitive empathy past the end of where human psychological development seems to stop.

Why isn't the answer obviously "yes"? What would it look like for this not to be the case? (I'm generally somewhat skeptical of descriptions like "just faster" if the faster is like multiple orders of magnitude and sure seems to result from new ideas rather than just a bigger computer.)

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2023-02-05T17:01:14.759Z · LW(p) · GW(p)

So there are different notions of more here.

There's more in the sense I'm thinking of, where it's not clear additional levels of abstraction enable deeper understanding given enough time. If 3 really is all the levels you need, because that's how many it takes to think about any number of levels of depth (again by swapping out levels in your "abstraction registers"), then additional levels end up being in the same category.

And then there's more in the sense of doing things faster, which makes things cheaper. I'm perhaps more skeptical of scaling than you are. I do agree that many things become cheap at scale that are too expensive to do otherwise, and that does produce a real difference.

I'm doubtful in my comment of the former kind of more. The latter type seems quite likely.

comment by JBlack · 2023-01-30T23:51:28.025Z · LW(p) · GW(p)

I think the claim at the start doesn't nearly cover the reasons we want AGI. As I see it, the main reason we want AGI is that there's a lot of stuff that we can already do, but we want it done faster and more cheaply without sacrificing much flexibility or reliability.

The trouble is that we've picked most of the low-hanging fruit, and many of the tasks that remain need something approximating human intelligence. They sometimes also need the ability to work with human social and legal contexts, and to be fine-tuned without an (expensive!) army of programmers specifying every rule.

There's some possibility that creating an entity that thinks like a human may tie into our drive to reproduce.

It's also just an extremely interesting problem from a technical point of view.

But sure, structure and understanding do appear to be major factors in intelligence of a human-like nature and it's interesting to try to classify and define such things.