No Abstraction Without a Goal

post by dkirmani · 2022-01-10T20:24:02.879Z · LW · GW · 27 comments

You might argue that "abstraction" only means preserving the information used to predict faraway [LW · GW] observations from a given system. However, coarsely modeling distant objects is often a convergent subgoal for the kinds of organisms that Nature selects for.

The scout does not tell the general about the bluejays he saw. He reports the number of bombers in the enemy's hangar. Condensation of information always selects for goal-relevant information. Any condensation of information implies that the omitted information is less goal-relevant than the reported information; there is no abstraction without a goal.

27 comments

Comments sorted by top scores.

comment by Gordon Seidoh Worley (gworley) · 2022-01-11T00:43:42.141Z · LW(p) · GW(p)

I think this is exactly right, because without some goal, purpose, or even just a norm to be applied, there's nothing to power knowing [LW · GW] anything, since knowing is, at its ground, about picking and choosing what goes into which category in order to separate out the world into things.

comment by Joe Collman (Joe_Collman) · 2022-01-10T22:28:58.605Z · LW(p) · GW(p)

Probably it makes sense to emphasize that it's the selection of the abstraction that implies a goal, not the use of the abstraction. If an abstraction shows up in an optimised thing, that's evidence that whatever optimised it had a goal.

Replies from: dkirmani
comment by dkirmani · 2022-01-10T22:38:00.533Z · LW(p) · GW(p)

That's true. But do abstractions ever show up in non-optimized things? I can't think of a single example.

Replies from: Joe_Collman
comment by Joe Collman (Joe_Collman) · 2022-01-10T23:13:47.286Z · LW(p) · GW(p)

The set of things not influenced by any optimisation process is pretty small - so we'd probably have to be clearer in what counts as "non-optimized". (I'm also not sure I'd want to say that selection processes need to have a 'goal' exactly.)

It strikes me that the argument you're making might not say much about abstraction specifically - unless I'm missing something essential, it'd apply to any a-priori-unlikely configuration of information.

Replies from: dkirmani
comment by dkirmani · 2022-01-10T23:37:26.133Z · LW(p) · GW(p)

The set of things not influenced by any optimisation process is pretty small - so we'd probably have to be clearer in what counts as "non-optimized". (I'm also not sure I'd want to say that selection processes need to have a 'goal' exactly.)

Both good points. "Goal" isn't the best word for what selection processes move towards.

It strikes me that the argument you're making might not say much about abstraction specifically - unless I'm missing something essential, it'd apply to any a-priori-unlikely configuration of information.

Besides just being unlikely configurations of information, abstractions destroy sensory information that did not previously have much of a bearing on actions that increased fitness (or is "selection stability" a better term?).

comment by Shmi (shminux) · 2022-01-10T21:39:01.521Z · LW(p) · GW(p)

Abstraction is a compression algorithm for a computationally bounded agent; I don't see how it is related to a "goal", except insofar as a goal is just another abstraction, and they all have to work together for the agent to maintain a reasonably faithful internal map of the world.

Replies from: dkirmani, TAG
comment by dkirmani · 2022-01-10T21:52:19.852Z · LW(p) · GW(p)

Yes, abstraction is compression, but real-world abstractions (like trees, birds, etc.) are very lossy forms of compression. When performing lossy compression, you need to ask yourself what information you value.

When compressing images, for example, humans usually don't care about the values of the least-significant bits, so you can round all 8-bit RGB intensity values down to the nearest even number and save yourself 3 bits per pixel in exchange for a negligible degradation in subjective image quality. Humans not caring about the least-significant bit is useful information about your goal, which is to compress an image for someone to look at.
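
(A minimal sketch of that rounding step, assuming NumPy; the image here is a made-up placeholder.)

```python
import numpy as np

# Hypothetical 8-bit RGB image, shape (height, width, 3).
image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

# Round every channel value down to the nearest even number,
# i.e. drop the least-significant bit of each of R, G, and B.
quantized = image & 0xFE  # discards 1 bit per channel, 3 bits per pixel
```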

Replies from: philh
comment by philh · 2022-01-16T21:39:45.743Z · LW(p) · GW(p)

I think it's not a coincidence that the high-order bits are the ones that are preserved by more physical processes. Like, if you take two photos of the same thing, the high order bits are more likely to be the same than the low order bits. Or if you take a photo of a picture on a screen or printed out. Or if you dye two pieces of fabric in the same vat.
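
(A toy simulation of that claim, assuming NumPy; the noise level and sizes are made up.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Two noisy 8-bit "photos" of the same underlying intensities.
truth = rng.integers(0, 256, size=100_000)
photo_a = np.clip(truth + rng.normal(0, 4, truth.shape), 0, 255).astype(np.uint8)
photo_b = np.clip(truth + rng.normal(0, 4, truth.shape), 0, 255).astype(np.uint8)

# How often does each bit position agree between the two photos?
for bit in range(8):
    agreement = np.mean(((photo_a >> bit) & 1) == ((photo_b >> bit) & 1))
    print(f"bit {bit}: agreement {agreement:.3f}")
# Low-order bits agree roughly at chance; high-order bits agree almost always.
```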

I'm not saying you couldn't get an agent that cared about the low-order bits and not the high-order bits, and if you did have such an agent maybe it would find abstractions that we wouldn't. But I don't think I'm being parochial when I say that would be a really weird agent.

comment by TAG · 2022-01-11T20:54:43.628Z · LW(p) · GW(p)

The argument given by the OP seems valid to me ... that is the reason to believe that abstractions relate to goals.

Goals are not abstractions in the sense of compressions of an existing territory. When Kennedy asserted a goal to put a man on the moon, he was not representing something that was already true.

comment by tailcalled · 2022-01-11T09:25:52.773Z · LW(p) · GW(p)

The scout does not tell the general about the bluejays he saw. He reports the number of bombers in the enemy's hangar.

The number of enemy bombers seems more relevant than the number of bluejays for predicting most far-future variables, to me? E.g. who will control the land, how much damage there will be to existing things in the area, etc. Maybe it's because I'm not an ornithologist, so I don't know anything about bluejays. But I'd think humans tend to be the dominant force in influencing an area, with bluejays exerting only negligible influence.

Replies from: dkirmani
comment by dkirmani · 2022-01-11T10:37:02.466Z · LW(p) · GW(p)

This implies that you care about things like "who owns the land", "are the buildings intact", et cetera. The information you care about leaks information about your values.

Replies from: tailcalled
comment by tailcalled · 2022-01-11T10:54:46.119Z · LW(p) · GW(p)

"Who owns the land" has influences on many far away variables, as those who own the land can implent policies about what to do with the land. Similarly, "Are the buildings intact?" has influences on many far away variables, because it determines whether the people who live on the land continue to live on the land, and people who live in a place are the ones who influence the place the most.

If I wanted to understand the long-term future of an area that was currently at war, I'd want to know the information relevant for who wins the war and how destructive the war is, as that has a lot of effects. Meanwhile I don't know of any major effects of bluejays.

comment by Pattern · 2022-01-11T23:59:14.622Z · LW(p) · GW(p)
there is no abstraction without a goal.

This isn't immediately obvious. What goal is necessary for 'trees' (in general) as opposed to individual trees?

Replies from: dkirmani
comment by dkirmani · 2022-01-12T03:55:31.659Z · LW(p) · GW(p)

There are lots of goals that are helped by having the abstraction "tree", like "run to the nearest tree and climb it in order to escape the charging rhino". My point was that the set of goals that are helped by having the abstraction "tree" is smaller than the set of all possible goals, so if we know that the abstraction "tree" is useful to you, we have more information about your goals.

comment by Maxwell Peterson (maxwell-peterson) · 2022-01-11T15:42:07.862Z · LW(p) · GW(p)

Good point! Hadn't thought of it this way before, but totally agree.

comment by Slider · 2022-01-11T13:21:50.665Z · LW(p) · GW(p)

Being a stickler for generalization, I could believe that for any naturally occurring abstraction there is a goal behind it, in a "no smoke without fire" kind of way. However, if you brute-force through all the possible ways to abstract, I am less sure that those variants that do not have natural occurrences have an associated goal. For example, what is the goal of an abstraction that includes both bombers and bluejays?

Replies from: dkirmani
comment by dkirmani · 2022-01-12T03:45:35.472Z · LW(p) · GW(p)

I am less sure that those variants that do not have natural occurrences have an associated goal.

The abstractions that do not occur naturally do not prioritize fitness-relevant information. You could conceive of goals that they serve, but these goals are not obviously subgoals of fitness-maximization.

comment by Ericf · 2022-01-10T22:59:03.305Z · LW(p) · GW(p)

This seems tautological? If the military scout returns with reports of birds and ants, that is still an abstraction, but it isn't relevant to the goal (as those words are commonly used). You seem to be defining a goal in terms of what the abstraction retains.

Replies from: TAG, dkirmani
comment by TAG · 2022-01-14T15:25:41.714Z · LW(p) · GW(p)

You can make the claim that a useful abstraction must serve a purpose, without making the claim that all abstractions are useful.

comment by dkirmani · 2022-01-10T23:13:59.335Z · LW(p) · GW(p)

If the military scout returns with a poem about nature, then yes, that's still an abstraction. The scout's abstraction prioritizes information that is useless to the general's goals, so we can guess that the scout's goals are not well aligned with the general's.

You seem to be defining a goal in terms of what the abstraction retains.

I'm not sure if it's possible to fully specify goals given abstractions. But for a system subject to some kind of optimization pressure, knowing an abstraction that the system uses is evidence that shifts probability mass within goal-space.

Replies from: Ericf
comment by Ericf · 2022-01-11T04:26:52.909Z · LW(p) · GW(p)

Perhaps the scout had a goal of "provide a list of wildlife"

The abstractions used (specific sound waves standing in for concepts of animals, grouping similar animals together under single headings, etc.) are still orthogonal to that goal. They are in service to a narrow goal of "communicate the concept in my brain to yours", but that answer gets you a strike on the Family Feud prompt "Name a goal of a wilderness scout."

Replies from: dkirmani
comment by dkirmani · 2022-01-11T05:38:50.706Z · LW(p) · GW(p)

"Provide a list of wildlife" has subgoal "communicate the concept in my brain to yours" has subgoal "use specific sounds to represent animals". "Provide a list of wildlife" is not a subgoal of "win the war".

Replies from: Pattern
comment by Pattern · 2022-01-12T00:05:04.106Z · LW(p) · GW(p)

Of wildlife and winning the war:

The supply train is delayed by an enemy attack. Shoring up supplies might be achieved by:

  • taking resources from the enemy.
  • hunting wildlife.

Seemingly anything can be related to any goal, under some circumstance. "Provide a list of wildlife" might indicate whether there's anything (or a lot of things) that can potentially be hunted. It can also indicate whether the enemy can subsist off wildlife if they are good at hunting.

Replies from: dkirmani
comment by dkirmani · 2022-01-12T03:22:59.003Z · LW(p) · GW(p)

Yes, I spoke too strongly. In the weighted causal graph of subgoals, I would bet that "provide a list of wildlife" would be less relevant to the goal "win the war" than "report #bombers".

Replies from: Pattern
comment by Pattern · 2022-01-12T03:35:11.433Z · LW(p) · GW(p)

My point was less about weight, and more about conditions that make it relevant. Yes, this might treat relevant/not as a binary, but it is an abstraction related to action, for example:

'orders are about a focus* (while someone scouting may act responsively to changing conditions)'. Arguably, scouting is open ended - the scout knows what might be important (at least if they see it). How things are done in practice here might be worth looking into.

*I'm making this up. The point is, actions can also throw stuff out.

comment by Adam Shai (adam-shai) · 2022-01-11T18:00:30.502Z · LW(p) · GW(p)

"Condensation of information always selects for goal-relevant information." To me this seems either not true, or it generalizes the concept of "goal-relevant" so broadly that it doesnt seem useful to me. If one is actively trying to create abstractions that are useful to achieving some goal then it is true. But the general case of losing information need not be towards some goal. For instance, it's easy to construct a lossy map that takes high dimensional data to low dimensional data, whether or not it's useful seems like a different issue.

One might say that they are interested in abstractions only in the cases where they are useful. They might also make an empirical claim (or a stylistic choice) that thinking about abstractions in the framework of goal-directed actions will be a fruitful way to do AI, study the brain, etc., but these are empirical claims that will be borne out by how well different research programs help us understand things, and are not statements of fact as far as I can tell.

You might also reply to this, "no, condensation of information without goal-relevance is just condensation of information, but it is not an abstraction", but then the claim that an abstraction only exists with goal-relevance seems tautological.

Replies from: dkirmani
comment by dkirmani · 2022-01-12T03:49:20.672Z · LW(p) · GW(p)

For instance, it's easy to construct a lossy map that takes high-dimensional data to low-dimensional data; whether or not it's useful seems like a different issue.

Yep. Most such maps are useless (to you) because the goals you have occupy a small fraction of the possible goals in goal-space.

You might also reply to this, "no, condensation of information without goal-relevance is just condensation of information, but it is not an abstraction", but then the claim that an abstraction only exists with goal-relevance seems tautological.

Nope, all condensation of information is abstraction. Different abstractions imply different regions of goal-space are more likely to contain your goals.