Harmonic Wave Resonance

post by lsusr · 2020-05-31T04:10:58.143Z · score: 7 (4 votes) · LW · GW · 2 comments

In a previous essay [LW · GW], I illustrated how modern machine learning architectures require too much training data to teach themselves high-level concepts. They might be able to learn concepts one or even two rungs up the ladder of abstraction. Then they hit a computational wall[1].

To get around this bottleneck, an intelligence must be able to create an abstraction layer and then, inductively, build further abstraction layers on top of the previous one. The resulting structure is a fractal[2]. Thus we come to my First Law of Artificial Intelligence.

Any algorithm that is not organized fractally will eventually hit a computational wall, and vice versa.

―Lsusr's First Law of Artificial Intelligence

Fractally nested intelligences construct a graph of abstractions from specific to general. Changes to specific small-scale abstractions produce large changes in the behavior of general large-scale abstractions. General abstractions are emergent from specific abstractions.
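To make "build an abstraction layer on top of an abstraction layer, inductively" concrete, here is a minimal sketch in Python. Everything in it (the `AbstractionLayer` class, the PCA-style compression, the layer sizes) is a hypothetical illustration of stacked abstractions, not the architecture this post is proposing.

```python
# A minimal sketch of inductively stacked abstraction layers, assuming each
# layer learns a compressed representation of the layer below it. All names
# here (AbstractionLayer, fit_transform, build_stack) are illustrative, not
# an existing library API.

import numpy as np


class AbstractionLayer:
    """Learns a linear, PCA-like compression of its input features."""

    def __init__(self, n_abstractions):
        self.n_abstractions = n_abstractions
        self.components = None

    def fit_transform(self, data):
        # Find the directions of greatest variance and project onto them.
        centered = data - data.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        self.components = vt[: self.n_abstractions]
        return centered @ self.components.T


def build_stack(data, layer_sizes):
    """Build each abstraction layer on top of the previous one, inductively."""
    layers, representation = [], data
    for size in layer_sizes:
        layer = AbstractionLayer(size)
        representation = layer.fit_transform(representation)
        layers.append(layer)
    return layers, representation


# Toy usage: 64-dimensional raw observations compressed through three
# successively more general abstraction layers.
raw = np.random.rand(1000, 64)
stack, top_level = build_stack(raw, layer_sizes=[32, 8, 2])
print(top_level.shape)  # (1000, 2) -- the most general abstractions
```

Each layer only ever sees the output of the layer directly beneath it, which is what lets the stack keep climbing the ladder of abstraction instead of re-learning everything from raw data.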

I believe the human brain embeds its fractal in Connectome-Specific Harmonic Waves [LW · GW]. It follows that bottom-up (evidence-heavy) and top-down (prior-heavy) processing are both governed by a resonance equation whose parameters are to be found experimentally. The resonance equation provides a mechanism for bottom-up and top-down processing simultaneously.
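As a rough illustration of what resonance on a connectome could mean computationally, here is a toy sketch that treats harmonic modes as eigenvectors of a graph Laplacian, in the spirit of connectome-specific harmonic waves. The graph, the activity pattern, and the gain profile are all invented for illustration; the exact resonance equation and its experimentally determined parameters are left open, as in the post.

```python
# A toy sketch of connectome-specific harmonics as eigenvectors of a graph
# Laplacian. The "connectome", the activity vector, and the amplification
# weights are made up; this shows one way bottom-up projection and top-down
# amplification could share the same harmonic basis.

import numpy as np

# Toy "connectome": a symmetric adjacency matrix over 6 regions.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Graph Laplacian L = D - A; its eigenvectors are the harmonic modes.
L = np.diag(A.sum(axis=1)) - A
eigenvalues, harmonics = np.linalg.eigh(L)

# Decompose an activity pattern into harmonic coefficients (bottom-up),
# re-weight the coefficients (top-down amplification), and reconstruct.
activity = np.array([0.9, 0.1, 0.2, 0.8, 0.3, 0.7])
coefficients = harmonics.T @ activity            # bottom-up: project onto modes
gain = np.linspace(1.5, 0.5, len(coefficients))  # amplify the broadest modes
reweighted = harmonics @ (gain * coefficients)   # top-down: reconstruct

print(np.round(eigenvalues, 3))
print(np.round(reweighted, 3))
```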

Top-down processing is a kind of amplification. For example, once you realize that black-capped chickadees (high-level abstraction) are a species of bird, your eyes will pay closer attention (amplification) to the characteristic black markings on the bird's head (low-level abstraction).

[Image: a black-capped chickadee]

Bottom-up processing is how an intelligence creates its own semantic layer.

Unstable Minds

A self-amplifying hierarchical system is chaotic; small changes to training data can produce large changes in self-organization. Resonant systems in particular are so prone to chaos that the double pendulum is a classroom example of chaotic motion.
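For readers who want to see that sensitivity directly, here is a rough numerical sketch of the double pendulum using its standard equations of motion. The masses, lengths, step size, and the size of the initial perturbation are arbitrary illustrative choices.

```python
# A rough numerical sketch of the double pendulum's sensitivity to initial
# conditions: two trajectories that start one microradian apart typically
# differ at macroscopic scale after 20 simulated seconds.

import numpy as np

g, m1, m2, l1, l2 = 9.81, 1.0, 1.0, 1.0, 1.0


def derivatives(state):
    """Standard double-pendulum equations of motion for [th1, w1, th2, w2]."""
    th1, w1, th2, w2 = state
    delta = th1 - th2
    denom = 2 * m1 + m2 - m2 * np.cos(2 * th1 - 2 * th2)
    a1 = (-g * (2 * m1 + m2) * np.sin(th1)
          - m2 * g * np.sin(th1 - 2 * th2)
          - 2 * np.sin(delta) * m2 * (w2**2 * l2 + w1**2 * l1 * np.cos(delta))
          ) / (l1 * denom)
    a2 = (2 * np.sin(delta) * (w1**2 * l1 * (m1 + m2)
                               + g * (m1 + m2) * np.cos(th1)
                               + w2**2 * l2 * m2 * np.cos(delta))
          ) / (l2 * denom)
    return np.array([w1, a1, w2, a2])


def simulate(state, steps=20000, dt=0.001):
    """Integrate with fourth-order Runge-Kutta."""
    for _ in range(steps):
        k1 = derivatives(state)
        k2 = derivatives(state + dt / 2 * k1)
        k3 = derivatives(state + dt / 2 * k2)
        k4 = derivatives(state + dt * k3)
        state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return state


start = np.array([2.0, 0.0, 2.0, 0.0])         # both pendulums raised high
perturbed = start + np.array([1e-6, 0, 0, 0])  # nudge the first angle slightly
print(simulate(start) - simulate(perturbed))   # divergence after 20 seconds
```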

The First Law implies that an AI's updates must percolate both up and down its hierarchical organization, resulting in chaotic self-modification.

No algorithm without the freedom to self-alter its own error function can operate unsupervised on small data.

―Lsusr's Second Law of Artificial Intelligence [? · GW]

The Second Law implies that an AI has the freedom to modify its own morality.

The First Law implies that an AI will modify itself chaotically.

An AI cannot be considered "powerful" unless it is organized fractally. An AI cannot be considered "general" unless it can operate unsupervised on small data. Together, the First and Second Laws imply that an AGI must have the capability to make large changes to its own morality in response to small changes in its input. In other words, the morality of an AGI is inherently chaotic.

A chaotically-moral AGI is the opposite of a reliably aligned system. This could throw a wrench into the creation of a reliably aligned AI.


  1. In a hybrid machine learning system, human programmers often create layers of abstraction for the system. These ad hoc systems do not suffer from the same computational wall. Nor do they constitute artificial general intelligence. ↩︎

  2. A fractal software architecture does not depend on fractal underlying hardware. ↩︎

2 comments

comment by MoritzG · 2020-06-01T18:20:19.918Z · score: 1 (1 votes) · LW(p) · GW(p)

"freedom to self-alter its own error function"

How? By changing the function alone or by changing the input to that function?

comment by lsusr · 2020-06-01T21:46:27.669Z · score: 2 (1 votes) · LW(p) · GW(p)

By tuning the function's parameters that define success. In your words, "[b]y changing the function alone".