How Should We Measure Intelligence Models: Why Use Frequency of Elemental Information Operations

post by hwj20 · 2024-10-24T16:54:19.096Z · LW · GW · 0 comments

Contents

  Introduction
    Intelligence and Wisdom
    Intelligent Models and the Wisdom of Natural Selection
  How Frequency of Ops Explains Intelligence Issues
    Elemental Operations
    Solving the Brain’s Black Box: Can We Migrate to Computers?
  True Elemental Operations? Landauer’s Principle
    Landauer's Principle and Minimum Energy Dissipation
    Limitation
  Conclusion
  References

Introduction

When I’m in the lab staring at the buzzing GPU servers, I often wonder: am I smarter, or are these machines smarter? At least when it comes to rendering tasks, it’s clear that I can’t compete with these machines. After all, whether the human eye has millions or billions of pixels[1], I surely couldn’t outperform eight A100 GPUs working together. Similarly, if I were tasked with memorizing billions of faces, it would be completely impossible.

 

Intelligence and Wisdom


Intelligence is related to the frequency of information processing rather than the outcome of that processing.


We can define intelligence by the rate of information operations per unit of time, also known as the frequency of elemental information operations. For simplicity, let’s call these information operations “ops.” “Elemental” means the ops cannot be subdivided into smaller computational units.

Let’s imagine we discover a new alien microorganism with a very simple nervous system, just a light receptor and a flagellum that controls forward or backward movement. When it sees blue (assuming blue represents food), it sends a forward signal to swim toward the food; when it sees red (assuming red represents a predator), it sends a backward signal.


In this simple example, we've constructed a basic intelligent system that processes environmental signals and responds to them.
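To make this concrete, here is a minimal sketch of that nervous system in code (the colors and signal names are just the assumptions from the example above, nothing more):

```python
# A minimal sketch of the alien microorganism's "nervous system":
# one light receptor in, one movement signal out.

def respond(light_color: str) -> str:
    """Map a perceived color to a movement signal."""
    if light_color == "blue":   # blue = food
        return "forward"
    if light_color == "red":    # red = predator
        return "backward"
    return "idle"               # anything else: do nothing

print(respond("blue"))   # forward
print(respond("red"))    # backward
```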



Now, imagine we find another alien microorganism with a similar structure, but it sends a backward signal when it sees food and a forward signal when it sees predators. How should we evaluate the intelligence of these two organisms?


In modern AI technology, if our model has a high error rate in a binary classification task, we can simply invert the output to achieve a high accuracy rate (ignoring the cost of inversion for now). Thus, these two alien organisms actually possess the same level of intelligence because they process information at the same rate per unit of time—despite their vastly different outcomes.
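A toy illustration of the inversion trick (the “model” and data below are made up for illustration; no real classifier is involved):

```python
# Toy illustration: a binary classifier that is wrong ~90% of the time
# becomes ~90% accurate once we invert its output.

import random

random.seed(0)
labels = [random.randint(0, 1) for _ in range(1000)]

def bad_model(y_true: int) -> int:
    """Predict the wrong label about 90% of the time (a stand-in, not a real model)."""
    return y_true if random.random() < 0.1 else 1 - y_true

preds = [bad_model(y) for y in labels]
acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
acc_inverted = sum(1 - p == y for p, y in zip(preds, labels)) / len(labels)

print(f"raw accuracy:      {acc:.2f}")           # roughly 0.10
print(f"inverted accuracy: {acc_inverted:.2f}")  # roughly 0.90
```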


However, the latter would be eliminated by natural selection. The issue is not that it “lost” in terms of intelligence; it’s that it lacks the “wisdom” to adapt to natural selection.

 

Intelligent Models and the Wisdom of Natural Selection

When building AI models, we don’t always choose the most complex models; rather, we focus on selecting models that best meet the requirements. For example, when designing a traffic light recognition model, we prioritize models with the highest accuracy and discard those that perform poorly. This selection process makes us, in an artificial environment, play a role similar to natural selection.


From this perspective, what we’re selecting isn’t the “smartest” model, but the “wisest” model, i.e., the one that best meets our predefined criteria (such as accuracy). This is essentially a pursuit of “artificial wisdom.”
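As a rough sketch of this selection process (the candidate “models” and evaluation data below are placeholders, not a real pipeline):

```python
# Sketch of "artificial selection" of models: keep the candidate that best
# meets a predefined criterion (here, accuracy on a small evaluation set).

def select_wisest(models, eval_data):
    """Return the model with the highest accuracy on eval_data."""
    def accuracy(model):
        return sum(model(x) == y for x, y in eval_data) / len(eval_data)
    return max(models, key=accuracy)

# Two toy "traffic-light" policies evaluated on (color -> action) pairs.
eval_data = [("green", "go"), ("red", "stop"), ("yellow", "stop")]
always_go = lambda color: "go"
rule_based = lambda color: "go" if color == "green" else "stop"

best = select_wisest([always_go, rule_based], eval_data)
print(best("red"))   # stop
```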

 

How Frequency of Ops Explains Intelligence Issues

We have discussed the definition of intelligence. Next, let’s talk about some “black box” issues in the field of intelligence. One of the most intriguing is how to measure human brain intelligence.


Many people wonder whether they have a “high IQ” and whether individuals who struggle with math but excel in music can still be considered intelligent. In fact, existing measurement methods, such as IQ tests, evaluate performance on specific tasks. This involves both “wisdom” (e.g., learning tricks to lower cognitive load or memorizing the answer) and “intelligence” (i.e., brain processing speed).


In the next section, we will attempt to quantify human brain models. But first, it’s worth considering the brain’s origins.


Some have compared the brain’s power to a GPU's power[2]. Although the brain’s information transmission rate is much slower than a GPU’s, the brain excels at powerful parallel processing. The brain is more efficient, while GPUs are faster. Regrettably, we are not as “smart” as a GPU in terms of processing speed.


This relationship is similar to the comparison between birds and airplanes. Birds evolved under natural selection for energy efficiency (which is why we don’t see "jet-powered birds"). Likewise, Earth’s energy cycle relies primarily on solar energy, which is converted into plant biomass and then into other forms of energy. The higher an organism’s energy consumption, the harder it is to sustain: high-energy-consuming animals like tigers typically occupy larger territories because local energy sources are insufficient to support multiple predators. This also explains why plants haven’t evolved higher intelligence or the ability to move; they simply lack the energy required. Perhaps one day we’ll discover thinking plants on more energy-rich planets.

 

Elemental Operations


At this point, you might associate elemental operations with floating-point operations per second (FLOPS). Let’s return to the example of the alien microorganisms. As they evolve, their nervous systems become more complex.

Let’s assume their escape signal strength is controlled by the equation s = k/x, where k is a constant and x is the distance from the predator. When they are very close to the predator, their bodies are in extreme danger, and their nervous systems experience enormous pressure (they’re terrified).


But what if another organism’s escape velocity were controlled by some more complex function of x? How do we determine whether this organism is "smarter" than the one controlled by k/x? Is its processing speed faster?

This seems tricky, but modern computers can evaluate such mathematical functions while keeping errors within an exceedingly small margin. Similarly, alien organisms’ nervous systems also have errors, which may be larger than those of computers, but these errors are due to random disturbances, not intelligent operations. Thus, these errors are irrelevant to the “intelligence” we seek.
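A small sketch of this point, with the response functions and noise level chosen purely for illustration (the k/x form follows the example above; the k/x^2 alternative is an assumption):

```python
# Two escape-response functions evaluated by the same machinery, each with a
# small random disturbance layered on top.

import random

K = 1.0  # arbitrary constant from the example above

def response_simple(x: float) -> float:
    """Escape signal that grows as the predator gets closer (k/x)."""
    return K / x

def response_complex(x: float) -> float:
    """A more complicated dependence on distance (assumed here as k/x^2)."""
    return K / (x * x)

def noisy(f, x: float, sigma: float = 0.01) -> float:
    """Evaluate f(x) plus a small random disturbance, as a real nervous system would."""
    return f(x) + random.gauss(0.0, sigma)

for x in (0.5, 1.0, 2.0):
    print(x, round(noisy(response_simple, x), 3), round(noisy(response_complex, x), 3))
```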


Assuming that an elemental operation takes two input bits and produces one output bit, we can derive the following truth table:

Table 1 Logical Operations on Two Input Bits

Operation \ Input bits | 0 0 | 0 1 | 1 0 | 1 1
Zero                   |  0  |  0  |  0  |  0
AND                    |  0  |  0  |  0  |  1
Value as First Bit     |  0  |  0  |  1  |  1
...                    | ... | ... | ... | ...
Set 1                  |  1  |  1  |  1  |  1

From this truth table, we can deduce 16 (2^4) types of elemental operations, including common ones like the AND and OR gates. In fact, modern computers are built from these logic gate structures (see digital logic circuits).
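One quick way to see all 16 operations is to enumerate the four output bits directly; a short sketch (the numbering of the operations is an arbitrary choice):

```python
# Enumerate all 16 possible elemental operations on two input bits:
# each operation is just a choice of output bit for each of the 4 inputs.

from itertools import product

# The four input combinations for two bits, in the same order as Table 1.
inputs = list(product([0, 1], repeat=2))   # (0,0), (0,1), (1,0), (1,1)

for op_id in range(16):
    outputs = [(op_id >> i) & 1 for i in range(4)]
    row = "  ".join(f"{a}{b}->{o}" for (a, b), o in zip(inputs, outputs))
    print(f"op {op_id:2d}: {row}")
```

In this numbering, AND shows up as op 8 and "Set 1" as op 15.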


Of course, in practice, we can’t create “perfect” elemental operation gates. Even a simple AND gate will have some voltage fluctuation in its output, but these fluctuations are typically within an acceptable error range.


In summary, any intelligent system, including our brains, operates within a tolerable error range; our intelligent structure functions within this margin of error.

 

Solving the Brain’s Black Box: Can We Migrate to Computers?


Suppose we could represent the structure of the human brain’s neurons with a function, such as B(x) = f(x) + ε, where f is the elemental operation model of the neurons and ε is the error.


In theory, our elemental operations can simulate any such f within the range of the human brain, because f does not involve complex mathematical functions. As long as our elemental operations are consistent at the mechanical and biological levels, and the initial conditions are right (such as the correct concentrations of chemicals), we could consider ourselves successfully replicated and migrated onto a computer.


However, this view has a flaw: our errors are different. Earlier, we mentioned that errors are acceptable because we are constantly dealing with them in real life. This is true, but we cannot guarantee that future errors will be identical. Human behavior and thought processes may change due to these subtle differences in error. Migrating to a computer might not affect our memory or personality, but subtle differences could arise in future behavior due to varying errors.

Of course, if we could design sufficiently precise structures and keep errors within acceptable limits, we could possibly succeed in migrating human intelligence to computers. This might be a feasible path to achieving digital consciousness.
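A minimal sketch of the worry about mismatched errors, assuming a toy stand-in for f and a Gaussian error term (neither is meant as a real model of neurons): running the same elemental model with two different, equally "acceptable" noise sources leaves the two copies in slightly different states.

```python
import random

def f(state: float) -> float:
    """Stand-in for the elemental-operation model of the neurons."""
    return 0.9 * state + 0.1

def step(state: float, rng: random.Random, sigma: float = 1e-3) -> float:
    """One update: B(x) = f(x) + error, with the error drawn from this system's noise source."""
    return f(state) + rng.gauss(0.0, sigma)

original = random.Random(1)   # the "biological" error source
migrated = random.Random(2)   # the "silicon" error source

x_bio = x_sil = 0.5
for _ in range(10_000):
    x_bio = step(x_bio, original)
    x_sil = step(x_sil, migrated)

print(abs(x_bio - x_sil))   # small but nonzero behavioral difference
```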

 

True Elemental Operations? Landauer’s Principle

Landauer's Principle and Minimum Energy Dissipation

According to Landauer's principle, the minimum energy dissipation for erasing one bit of information is

E_min = k_B · T · ln 2

where k_B is the Boltzmann constant (about 1.38 × 10^-23 J/K) and T is the absolute temperature of the environment.

This formula provides the minimum energy requirement for erasing one bit of information at a given temperature. Therefore, the limit of minimum energy dissipation is not determined by specific logic gates (such as AND or NOT gates), but by the physical processes involved, especially irreversible operations (such as information erasure).
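At room temperature this bound is tiny; a quick back-of-the-envelope check (assuming T = 300 K):

```python
# Landauer bound: minimum energy to erase one bit at temperature T.
# E_min = k_B * T * ln(2)

import math

k_B = 1.380649e-23   # Boltzmann constant, in joules per kelvin
T = 300.0            # assumed temperature: roughly room temperature, in kelvin

E_min = k_B * T * math.log(2)
print(f"{E_min:.3e} J per erased bit")   # about 2.9e-21 J
```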

 

Limitation

Given my limited knowledge of physics, I’ve only proposed one possible set of elemental operations, via the truth table in Table 1. There may be other operations that better adhere to the laws of physics, and I encourage collective brainstorming.

Conclusion

 

References

  1. What Is the Resolution of the Human Eye? (shotkit.com)
  2. Brain Efficiency: Much More than You Wanted to Know — LessWrong [LW(p) · GW(p)]
