Horizontal vs vertical generality

post by tailcalled · 2023-04-29T19:14:35.632Z

This is a note/definition I want to put out there. While the terms may sound similar to breadth vs depth of knowledge, the distinction is importantly different.

Let's define intelligence as the ability to process information and make decisions. (This is a very broad definition and includes not just humans but also e.g. most common computer programs.) General ability is intelligence that is applicable to many different tasks. Consider a key distinction in intelligence: horizontal generality, the ability to usefully contribute to many different tasks, vs vertical generality, the ability to handle a given task end-to-end, covering all of the cognitive subtasks it requires.

Humans could arguably be said to be on the simultaneous frontier of horizontal and vertical generality.

You might think horizontal vs vertical generality is the same as breadth vs depth of knowledge, but I don't think it is. The key distinction is that breadth vs depth of knowledge concerns fields of information (inputs), whereas horizontal vs vertical generality concerns tasks (outputs).

Some tasks may depend on multiple fields of knowledge, e.g. software development depends on programming capabilities and understanding user needs, which means that depth of knowledge doesn't guarantee vertical generality.

On the other hand, some fields of knowledge, e.g. math or conflict resolution, may give gains in multiple tasks, which means that horizontal generality doesn't require breadth of knowledge.

9 comments

comment by Gunnar_Zarncke · 2023-04-29T22:22:14.114Z

I feel that you are onto something here, but I have trouble coming up with an operationalization that is precise enough for me to make predictions based on your descriptions. But you clearly seem to have operationalizations in mind. Can you help me? For me, it starts with "define intelligence as the ability to process information and make decisions", which is not concrete enough for me to build a working model of the vertical vs horizontal distinction.

Replies from: tailcalled
comment by tailcalled · 2023-04-30T07:53:15.588Z

I like to literally imagine a big list of tasks, along the lines of:

  1. Invent and deploy a new profitable AI system
  2. Build a skyscraper
  3. Form a cat, which catches and eats mice, mates and raises kittens
  4. etc.

An operationalization of horizontal generality would then be the number of tasks on the list that something can contribute to. For instance, restricting ourselves to the first three items, a cat has horizontal generality 1, a calculator has horizontal generality 2, and a superintelligence has horizontal generality 3.

Within each task, we can then think of various subtasks that are necessary to complete it, e.g. for building a skyscraper, you need land, permissions, etc., and then you need to dig, set up stuff, pour concrete, etc. (I don't know much about skyscrapers, can you tell? 😅). Each of these subtasks needs some physical interventions (which we ignore because this is about intelligence, though they may be relevant for evaluating the generality of robotics rather than of intelligence) and some cognitive processing. The fraction of the required cognitive subtasks that can be performed by an entity within a task is its vertical generality (within that specific task).
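
As a toy sketch of this in code (the task and subtask names below are invented purely for illustration; only the counting scheme matters):

```python
# Toy sketch of the operationalization above. Task and subtask names are
# made-up placeholders mirroring the three example tasks.

# Each task is represented by the set of cognitive subtasks it requires
# (physical interventions are ignored, as discussed above).
tasks = {
    "invent_and_deploy_ai": {"design_model", "write_code", "numerical_calculation", "negotiate_contracts"},
    "build_skyscraper": {"get_permits", "numerical_calculation", "plan_construction"},
    "form_a_cat": {"catch_mice", "eat_mice", "mate", "raise_kittens"},
}

# The cognitive subtasks each entity can perform.
entities = {
    "cat": {"catch_mice", "eat_mice", "mate", "raise_kittens"},
    "calculator": {"numerical_calculation"},
    "superintelligence": set().union(*tasks.values()),
}

def horizontal_generality(capabilities: set, tasks: dict) -> int:
    """Number of tasks the entity can contribute to at all."""
    return sum(1 for subtasks in tasks.values() if capabilities & subtasks)

def vertical_generality(capabilities: set, subtasks: set) -> float:
    """Fraction of a task's required cognitive subtasks the entity covers."""
    return len(capabilities & subtasks) / len(subtasks)

for name, caps in entities.items():
    h = horizontal_generality(caps, tasks)
    v = vertical_generality(caps, tasks["build_skyscraper"])
    print(f"{name}: horizontal generality = {h}, vertical generality on skyscraper = {v:.2f}")
```

With these made-up capability sets, the cat contributes to one task, the calculator to two, and the superintelligence to all three, while their vertical generality on the skyscraper task comes out to 0, 1/3, and 1 respectively.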

comment by hold_my_fish · 2023-04-30T07:22:40.870Z

This is relevant to a topic I have been pondering: what are the differences between current AI, self-improving AI, and human-level AI? First, brief definitions:

  • Current AI: GPT-4, etc.
  • Self-improving AI: AI capable of improving its own software without direct human intervention, i.e., it can do everything OpenAI's R&D group does, without human assistance.
  • Human-level AI: AI that can do everything a human does. Often called AGI (for Artificial General Intelligence).

In your framework, self-improving AI is vertically general (since it can do everything necessary for the task of AI R&D) but not horizontally general (since there are many tasks it cannot attempt, such as driving a car). Human-level AI, on the other hand, needs to be both vertically general and horizontally general, since humans are.

Here are some concrete examples of what self-improving AI doesn't need to be able to do, yet humans can do:

  • Motor control. e.g. Using a spoon to eat, driving a car, etc.
  • Low latency. e.g. Real-time, natural conversation.
  • Certain input modalities might not be necessary. e.g. The ability to watch video.

Even though this list isn't very long, lacking these abilities greatly decreases the horizontal generality of the AI.

Replies from: tailcalled
comment by tailcalled · 2023-04-30T08:39:34.534Z

In your framework, self-improving AI is vertically general (since it can do everything necessary for the task of AI R&D)

It might actually not be; it's sort of hard to be vertically general.

An AI needs electricity and hardware. If it gets its electricity from its human creators and needs its human creators to actively choose to maintain its hardware, then those are necessary subtasks in AI R&D which it can't solve itself.

I think it makes sense to distinguish between a self-improving AI which can handle contract negotiations etc. in order to earn the money needed to buy electricity and hire people to maintain its hardware, vs an AI that must be owned by someone in order to achieve this.

That said, a self-improving AI may still be more vertically general than other things. I think it's sort of a continuum.

Even though this list isn't very long, lacking these abilities greatly decreases the horizontal generality of the AI.

One thing that is special about self-improving AIs is that they are, well, self-improving. So presumably they either increase their horizontal generality, their vertical generality, or their cost-efficiency over time (or more likely, increase a combination of them).

Replies from: hold_my_fish
comment by hold_my_fish · 2023-04-30T12:20:44.104Z

An AI needs electricity and hardware. If it gets its electricity by its human creators and needs its human creators to actively choose to maintain its hardware, then those are necessary subtasks in AI R&D which it can't solve itself.

I think the electricity and hardware can be considered part of the environment the AI exists in. After all, a typical animal (like say a cat) needs food, water, air, etc. in its environment, which it doesn't create itself, yet (if I understood the definitions correctly) we'd still consider a cat to be vertically general.

That said, I admit that it's somewhat arbitrary what's considered part of the environment. With electricity, I feel comfortable saying it's a generic resource (like air to a cat) that can be assumed to exist. That's more arguable in the case of hardware (though cloud computing makes it close).

Replies from: tailcalled
comment by tailcalled · 2023-04-30T13:15:17.805Z

I think there's a distinction between the environment being in ~equilibrium and you wrestling a resource out from the equilibrium, versus you being part of a greater entity which wrestles resources out from the equilibrium and funnels them to your part?

Replies from: hold_my_fish
comment by hold_my_fish · 2023-04-30T22:01:34.855Z

That's a good point, though I'd word it as an "uncaring" environment instead. Let's imagine though that the self-improving AI pays for its electricity and cloud computing with money, which (after some seed capital) it earns by selling use of its improved versions through an API. Then the environment need not show any special preference towards the AI. In that case, the AI seems to demonstrate as much vertical generality as an animal or plant.

Replies from: tailcalled
comment by tailcalled · 2023-05-01T08:24:34.433Z

That seems reasonable to me.

comment by watermark · 2023-04-30T00:49:08.796Z

I've used the same terms (horizontal and vertical generality) to refer to what I think are different concepts than the ones discussed here, but I wanted to share my versions of these terms in case you see any parallels.

Horizontal generality: An intelligence's ability to take knowledge/information learned from an observation/experience solving a problem and use it to solve other similarly-structured/isomorphic problems (e.g. a human notices that a problem in finding optimal routing can be essentially mapped to a graph theory problem and solving one solves the other) 
 

Vertical generality: An intelligence's ability to use their existing knowledge to augment their own intelligence with tools or by successfully designing smarter agents aligned to it (e.g. a human is struggling with solving problems in quantum mechanics, and no amount of effort is helping them. They find an alternative route to solving these problems by learning how to create aligned superintelligence which helps them solve the problems)

 

If you're an intelligence solving problems, increasing horizontal generality helps because it lets you see how problems you've already solved apply to problems you hadn't realized they applied to. Increasing vertical generality helps because it gives you an alternative route to solving the problem by actually increasing your effective problem-solving ability.