Automated intelligence is not AI

post by KatjaGrace · 2020-11-01T23:30:01.715Z

Crossposted from world spirit sock puppet.

Sometimes we think of ‘artificial intelligence’ as whatever technology ultimately automates human cognitive labor.

I question this equivalence, looking at past automation. In practice, human cognitive labor is replaced by things that don’t seem at all cognitive, or anything like what we otherwise mean by AI.

Some examples:

  1. Early in the existence of bread, it might have been toasted by someone holding it close to a fire, repeatedly observing it, judging its level of doneness, and adjusting. Now we have machines that hold the bread exactly the right distance away from a predictable heat source for a perfect amount of time. You could say that the shape of the object embodies a lot of intelligence, or that intelligence went into creating this ideal but non-intelligent tool.
  2. Self-cleaning ovens replace humans cleaning ovens. Humans clean ovens with a lot of thought—looking at and identifying different materials and forming and following plans to remove some of them. Ovens clean themselves by getting very hot.
  3. Carving a rabbit out of chocolate takes knowledge of a rabbit’s details, along with knowledge of how to move your hands to translate such details into chocolate with a knife. A rabbit mold automates this work, and while this route may still involve intelligence in the melting and pouring of the chocolate, all rabbit knowledge is now implicit in the shape of the tool, though I think nobody would call a rabbit-shaped tin ‘artificial intelligence’.
  4. Human pouring of orange juice into glasses involves various mental skills: for instance, classifying orange juice and glasses, judging how they relate to one another in space, and moving them while keeping an eye on this. Automatic orange juice pouring involves, for instance, a button that can only be pressed with a glass when the glass is in a narrow range of locations, which opens an orange juice faucet running into a spot common to all the possible glass-locations.

Some of this is that humans use intelligence where some other resource could do the job, because intelligence is cheap on the margin where the other resource is expensive. For instance, to get toast, you could just leave a lot of bread at different distances from the fire, then eat the one that comes out right. That is bread-expensive and human-intelligence-cheap (once you have come up with the plan, at least). But humans had lots of intelligence and not much bread. And if we later automate a task like this, before we have computers that can act very similarly to brains, then the alternate procedure will tend to be one that replaces human thought with something that actually is cheap at the time, such as metal.

I think a lot of this is that to deal with a given problem, you can either use flexible intelligence in the moment, or you can have an inflexible system that happens to be just what you need. Often you will start out using the flexible intelligence because, being flexible, it is useful for lots of things, so you have some sitting around for any problem, whereas you don’t yet have an inflexible system that happens to be just what you need. But if a problem seems to be happening a lot, it can become worth paying the up-front cost of building the ideal tool, to free up your flexible intelligence again.
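As a toy sketch of that contrast (with doneness reduced to a single number, and the sensing and heating as illustrative stand-ins rather than real hardware): the first function below observes the bread at every step and decides what to do, like a person at a fire; the second never looks at the bread at all, because the designer’s knowledge is baked into two fixed constants.

```python
"""Toy contrast: flexible in-the-moment intelligence vs. an inflexible tool.

'Doneness' is just a number that rises with applied heat; the sensing and
heating here are illustrative stand-ins, not a real toaster."""

import random


def toast_flexibly(target: float = 1.0) -> float:
    """Observe the bread at every step and decide what to do next."""
    doneness = 0.0
    heat = 0.2                        # initial guess at a heat setting
    while doneness < target:          # repeatedly check doneness
        if target - doneness < 0.2:   # nearly done: ease off to avoid burning
            heat = 0.05
        doneness += heat * random.uniform(0.8, 1.2)  # an unpredictable fire
    return doneness


def toast_inflexibly() -> float:
    """No sensing, no decisions: the designer's knowledge lives in two constants."""
    HEAT_PER_STEP = 0.1               # fixed, predictable heating element
    STEPS = 10                        # fixed timer
    doneness = 0.0
    for _ in range(STEPS):
        doneness += HEAT_PER_STEP     # never looks at the bread
    return doneness


if __name__ == "__main__":
    print("flexible:", round(toast_flexibly(), 2))
    print("inflexible:", round(toast_inflexibly(), 2))
```

The flexible version copes with an unpredictable fire at the cost of constant attention; the inflexible one only works because its heat source is predictable, which is exactly the sort of thing a toaster’s designers arrange up front.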

10 comments


comment by Richard Korzekwa (Grothor) · 2020-11-01T23:56:12.181Z

It seems to me that there is an important distinction here between "the thing that replaces human cognitive labor" and "the thing that automates human cognitive labor". For example, a toaster might eliminate the need for human cognitive labor or replace it with a non-cognitive task that accomplishes the same goal, but it does not automate that labor. A machine that automates the cognitive labor normally involved in toasting bread over a fire would need to make decisions about where to hold the bread, when to turn it, and when it is finished toasting, by having access to information about how the bread is doing, how hot the fire is, etc. Or maybe people are using these phrases differently than I am expecting?

Replies from: Kenny
comment by Kenny · 2020-11-05T23:51:19.605Z

I was also a little confused about the post's initial claim:

> Sometimes we think of ‘artificial intelligence’ as whatever technology ultimately automates human cognitive labor.

Maybe I haven't encountered thoughts like that much, or I've been reading too much David Chapman, but my immediate thought after reading the above was something like 'most human cognitive labor isn't automated by technology but obviated, usually by engineering the relevant environment'.

comment by avturchin · 2020-11-02T11:04:40.243Z

One more example is the multiplication table.

comment by Viliam · 2020-11-02T20:21:58.941Z

Another example: books. Especially textbooks. We have something that can teach humans, explain things, and yet is not intelligent itself.

comment by AnthonyC · 2020-11-02T12:13:55.882Z

Someone's cognitive labor went into making the rabbit mold, and everything from there on out eliminates the need to repeat that labor and reduces the number of people who need to have that knowledge. But somewhere, someone had to do enough cognitive labor to design and make the mold (or the mold-making process and equipment).

If instead there were a machine to which I could say "I want a rabbit mold for chocolate," and it would produce one without further human intervention, I'd have to reach farther to argue it isn't AI. Narrow AI, if it is scouring ads online for "mold," "chocolate," and "rabbit" and suggesting a top contender; more general AI if it is a mold-designing program with previous examples of "rabbit" molds and "chocolate" molds; and pretty darn general if it begins by scouring a library of information about chocolatiers, lagomorphs, Easter, materials science, human vision, commercials for existing chocolate rabbits, and cartoons and children's books with rabbit characters.

Replies from: KatjaGrace
comment by KatjaGrace · 2020-11-02T20:10:12.487Z

> Someone's cognitive labor went into making the rabbit mold, and everything from there on out eliminates the need to repeat that labor and reduces the number of people who need to have that knowledge.

Yeah, that's the kind of thing I had in mind in the last paragraph.

comment by adamShimi · 2020-11-04T12:16:42.340Z

This makes me think of Daniel Dennett's notion of reasons. What he argues is that many systems have reasons to do what they do, which come from design, where design can be either explicit design by humans or implicit design by something like evolution. And the big idea, going back to Darwin, is that you can have reasons without comprehension: a system (for example an animal, or the rabbit mold) can have reasons for specific behaviors that were designed (by evolution or by humans), without the ability to understand these reasons and adapt its behavior accordingly.

Yet this hard-coding of reasons seems like it has limits. Or more generally, for systems that need to adapt to a wide enough range of tasks, adding understanding (comprehension) to the mix is way more efficient. So I do expect something like transformative AI to have some modicum of flexible intelligence, if only because this will make it better at the really complex tasks (like language) than the competition.

Replies from: Kenny
comment by Kenny · 2020-11-06T01:10:03.088Z

Yes, that's a great intuition pump of his – the 'intentional stance', i.e. many systems act as if they had reasons or purpose.

Replies from: adamShimi
comment by adamShimi · 2020-11-06T12:22:29.459Z

Actually, I would say that this is more about the design stance than the intentional stance. For example, the rabbit mold becomes easier to understand by going from the physical stance to the design stance, but not by going from the design stance to the intentional stance.

That being said, Dennett is pretty liberal with his definition of intentional systems, which can encompass pretty much anything that can be predicted through the intentional stance (whether it's useful compared to the design stance or not).

But to go back to the topic of the post, even Dennett kind of agrees that systems without comprehension of their reasons, or with only a limited amount of it, are less intentional than systems that understand their own reasons.

Replies from: Kenny
comment by Kenny · 2020-11-09T04:10:55.057Z

I didn't understand (remember) Dennett's distinction between the design and intentional stances. I was thinking that design is a feature or part of intentional systems, e.g. a rabbit mold or the intricate structure of a (living) rabbit's leg. They both seem to be for some purpose.

After skimming the Wikipedia article on the intentional stance, I realized I was thinking of the 'design stance', as you correctly pointed out.

Maybe I was conflating the two because of the idea that a sufficiently complicated design might seem (or even be usefully modeled as) intentional? Like thinking of Nature as an intentional system designing rabbits (and people that then design rabbit molds).