Half-baked idea: a straightforward method for learning environmental goals?

post by Q Home · 2025-02-04T06:56:31.813Z · LW · GW

Contents

  Explanation 1
    One naive solution
    One philosophical argument
    One toy example
  Explanation 2
    Formalization

Epistemic status: I want to propose a method for learning environmental goals (a super big, super important subproblem in Alignment). It's informal, so it has a lot of gaps. I worry I missed something obvious, rendering my argument completely meaningless. I asked the LessWrong feedback team, but they couldn't find someone knowledgeable enough to take a look.

Can you tell me the biggest conceptual problems with my method? Can you tell me whether agent foundations [? · GW] researchers are aware of this method?

If you're not familiar with the problem, here's the context: Environmental goals; identifying causal goal concepts from sensory data; ontology identification problem; Pointers Problem [LW · GW]; Eliciting Latent Knowledge [? · GW].

Explanation 1

One naive solution

Imagine we have a room full of animals. AI sees the room through a camera. How can AI learn to care about the real animals in the room rather than their images on the camera?

Assumption 1. Let's assume AI models the world as a bunch of objects interacting in space and time. I don't know how critical or problematic this assumption is.

Idea 1. Animals in the video are objects with certain properties (they move continuously, they move with certain relative speeds, they have certain sizes, etc). Let's make the AI search for the best world-model which contains objects with similar properties (P properties).

Problem 1. Ideally, AI will find clouds of atoms which move similarly to the animals on the video. However, AI might just find a world-model (X) which contains the screen of the camera. So it'll end up caring about "movement" of the pixels on the screen. Fail.

Observation 1. Our world contains many objects with P properties which don't show up on the camera. So, X is not the best world-model containing the biggest number of objects with P properties.

Idea 2. Let's make the AI search for the best world-model containing the biggest number of objects with P properties.

Question 1. For "Idea 2" to make practical sense, we need to find a smart way to limit the complexity of the models. Otherwise the AI might just pad any model with arbitrary numbers of arbitrary objects. Can we find the right complexity restriction?

Question 2. Assume we resolved the previous question positively. What if "Idea 2" still produces an alien ontology humans don't care about? Can it happen?

Question 3. Assume everything works out. How do we know that this is a general method of solving the problem? We have an object in sense data (A) and we care about the physical thing corresponding to it (B): how do we know that B always behaves similarly to A and that there are always more instances of B than of A?
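To make the P-property checks concrete, here's a rough Python sketch of what counting "objects with P properties" in a candidate world-model could look like. The trajectory format and every threshold are arbitrary placeholders, just to pin down the idea:

```python
import math

def has_p_properties(trajectory, max_jump=1.0, max_speed=5.0, size_range=(0.1, 3.0)):
    """True if an object moves continuously, at a plausible speed, with a plausible size.
    An object is a list of (t, x, y, size) samples from some candidate world-model."""
    for (t0, x0, y0, s0), (t1, x1, y1, s1) in zip(trajectory, trajectory[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        if dist > max_jump:                    # continuity: no teleporting between samples
            return False
        dt = t1 - t0
        if dt > 0 and dist / dt > max_speed:   # animal-like speed range
            return False
    if not all(size_range[0] <= s <= size_range[1] for (_, _, _, s) in trajectory):
        return False
    return True

def count_p_objects(world_model_objects):
    """Score a candidate world-model by how many of its objects have P properties."""
    return sum(has_p_properties(traj) for traj in world_model_objects)
```

Under a score like this (plus some complexity restriction, per Question 1), the "camera screen" model from Problem 1 should lose to a model tracking the actual clouds of atoms, because the latter contains more objects with P properties.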

One philosophical argument

I think there's a philosophical argument which allows us to resolve Questions 2 & 3 (giving evidence that Question 1 should be resolvable too).

If the argument is true, the pointers problem should be solvable without the Natural Abstraction hypothesis [? · GW] being true.

Anyway, I'll add a toy example which hopefully makes it clearer what this is all about.

One toy example

You're inside a 3D video game, 1st person view. The game contains landscapes and objects, both made of small balls (the size of tennis balls) of different colors. There's also a character you control.

The character can push objects. Objects can break into pieces. Physics is Newtonian. Balls are held together by some force. Balls can have dramatically different weights.

Light is modeled by particles. The sun emits particles, which bounce off of surfaces.

The most unusual thing: as you move, your coordinates are fed into a pseudorandom number generator. The numbers from the generator are then used to swap the positions of arbitrary balls.
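In code, the swap mechanic is something like this (the seeding scheme and number of swaps are my arbitrary choices, just to pin the idea down):

```python
import random

def apply_swaps(ball_positions, character_coords, num_swaps=3):
    """Swap the positions of arbitrary balls, driven by the character's coordinates."""
    rng = random.Random(hash(character_coords))   # the character's coordinates seed the PRNG
    positions = list(ball_positions)
    for _ in range(num_swaps):
        i = rng.randrange(len(positions))
        j = rng.randrange(len(positions))
        positions[i], positions[j] = positions[j], positions[i]
    return positions
```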

You care about pushing boxes (like everything else, they're made of balls too) into a certain location.

...

So, the reality of the game has roughly 5 levels:

  1. The level of sense data (2D screen of the 1st person view).
  2. A. The level of ball structures. B. The level of individual balls.
  3. A. The level of waves of light particles. B. The level of individual light particles.

I think AI should be able to figure out that it needs to care about the 2A level of reality, because ball structures are much simpler to control (by doing normal activities with the game's character) than individual balls, and light particles are harder to interact with than ball structures, due to their speed and nature.
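Here's a toy numerical sketch of that "simpler to control" claim. The dynamics are completely made up (bounded pushes toward a target position, plus a fixed chance that the swap mechanic teleports an individual ball), but they show the kind of comparison I have in mind:

```python
import random

def structure_error(target=5.0, horizon=10, max_push=1.0):
    """A ball structure responds smoothly to bounded pushes toward the target."""
    pos = 0.0
    for _ in range(horizon):
        pos += max(-max_push, min(max_push, target - pos))
    return abs(pos - target)

def single_ball_error(rng, target=5.0, horizon=10, max_push=1.0, swap_p=0.3):
    """An individual ball is pushed the same way, but the swap mechanic teleports it."""
    pos = 0.0
    for _ in range(horizon):
        pos += max(-max_push, min(max_push, target - pos))
        if rng.random() < swap_p:              # assumed chance of being swapped away
            pos = rng.uniform(-10.0, 10.0)
    return abs(pos - target)

rng = random.Random(0)
trials = 1000
print("ball structure error: ", sum(structure_error() for _ in range(trials)) / trials)
print("individual ball error:", sum(single_ball_error(rng) for _ in range(trials)) / trials)
```

With numbers like these, the ball-structure level comes out far easier to steer than the individual-ball level, which is the sense in which 2A is the level worth caring about.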


Explanation 2

An alternative explanation of my argument:

  1. Imagine activities which are crucial for a normal human life. For example: moving yourself in space (in a certain speed range); moving other things in space (in a certain speed range); staying in a single spot (for a certain time range); moving in a single direction (for a certain time range); having varied visual experiences (changing in a certain frequency range); etc. Those activities can be abstracted into mathematical properties of certain variables (speed of movement, continuity of movement, etc). Let's call them "fundamental variables". Fundamental variables are defined using sensory data or abstractions over sensory data.
  2. Some variables can be optimized (for a long enough period of time) by fundamental variables; other variables can't. For example: proximity of my body to my bed is an optimizable variable (I can walk towards the bed, and walking is a normal activity); the amount of things I see is an optimizable variable (I can close my eyes or hide some things, and both actions are normal activities); the closeness of two particular oxygen molecules might be a non-optimizable variable (it might be impossible to control their positions without doing something weird). See the sketch after this list.
  3. By default, people only care about optimizable variables, unless there are special philosophical reasons to care about some obscure non-optimizable variable which doesn't have any significant effect on optimizable variables.
  4. You can have a model which describes typical changes of an optimizable variable. Models of different optimizable variables have different predictive power. For example, "positions & shapes of chairs" and "positions & shapes of clouds of atoms" are both optimizable variables, but models of the latter have much greater predictive power. Complexity of the models needs to be limited, by the way, otherwise all models will have the same predictive power.
  5. Collateral conclusions: typical changes of any optimizable variable are easily understandable by a human (since it can be optimized by fundamental variables, based on typical human activities); all optimizable variables are "similar" to each other, in some sense (since they all can be optimized by the same fundamental variables); there's a natural hierarchy of optimizable variables (based on predictive power). Main conclusion: while the true model of the world might be infinitely complex, physical things which ground humans' high-level concepts (such as "chairs", "cars", "trees", etc.) always have to have a simple model (which works most of the time, where "most" has a technical meaning determined by fundamental variables).
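Here's a minimal sketch of the optimizable/non-optimizable distinction from point 2. Fundamental variables show up only as a speed limit on "normal" actions; the toy dynamics, thresholds, and variable names are all made up:

```python
import random

def is_optimizable(step, read_var, target_shift, horizon=50, trials=200,
                   max_speed=1.0, tolerance=1.0):
    """True if bounded 'normal' actions reliably steer the variable to a target value."""
    rng = random.Random(0)
    successes = 0
    for _ in range(trials):
        state = {"x": 0.0}
        target = read_var(state) + target_shift
        for _ in range(horizon):
            # a "normal" action: speed-limited movement toward the target value
            action = max(-max_speed, min(max_speed, target - read_var(state)))
            state = step(state, action, rng)
        if abs(read_var(state) - target) <= tolerance:
            successes += 1
    return successes / trials > 0.5

def step_walk_to_bed(state, action, rng):
    """Proximity to the bed responds directly to walking."""
    return {"x": state["x"] + action}

def step_molecule_distance(state, action, rng):
    """Distance between two specific molecules: thermal noise swamps the action."""
    return {"x": state["x"] + 0.01 * action + rng.gauss(0.0, 5.0)}

def read_x(state):
    return state["x"]

print("bed proximity optimizable?    ", is_optimizable(step_walk_to_bed, read_x, 10.0))
print("molecule distance optimizable?", is_optimizable(step_molecule_distance, read_x, 10.0))
```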

Formalization

So, the core of my idea is this:

  1. AI is given "P properties" which a variable of its world-model might have. (Let's call a variable with P properties a P-variable.)
  2. AI searches for a world-model with the biggest number of P-variables, while making sure it doesn't introduce useless P-variables. We also need to be careful with how we measure the "amount" of P-variables: we need to measure something like "density" rather than a raw count (i.e. the number of P-variables contributing to a particular relevant situation, rather than the number of P-variables overall?). See the sketch after this list.
  3. AI gets an interpretable world-model (because P-variables are highly interpretable), adequate for defining what we care about (because by default, humans only care about P-variables).
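As a sketch of steps 1-2, under heavy assumptions: a candidate world-model is represented only by its complexity, its list of P-variables, and a way to tell which P-variables actually matter in a given situation. The interface and the "density" measure below are guesses at one possible formalization, not a finished proposal:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CandidateWorldModel:
    name: str
    complexity: float                       # e.g. description length in bits
    p_variables: List[str]                  # variables that have the P properties
    relevant: Callable[[str, str], bool]    # does this P-variable matter in this situation?

def p_density(model: CandidateWorldModel, situations: List[str]) -> float:
    """'Density' rather than raw count: useful P-variables per relevant situation."""
    if not situations:
        return 0.0
    per_situation = [sum(model.relevant(v, s) for v in model.p_variables)
                     for s in situations]
    return sum(per_situation) / len(situations)

def select_world_model(candidates: List[CandidateWorldModel], situations: List[str],
                       complexity_budget: float) -> CandidateWorldModel:
    """Pick the affordable model whose P-variables do the most work (steps 1-2 above)."""
    affordable = [m for m in candidates if m.complexity <= complexity_budget]
    return max(affordable, key=lambda m: p_density(m, situations))
```

The complexity budget plays the role of the restriction from Question 1; how to set it, and how to define the "relevant situations", are exactly the open parts.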

How far are we from being able to do something like this? Are agent foundations researchers pursuing this or something else?
