If all of the things that humans can do are required for knitting and driving cars, then there are two things that humans can do, generalized to that level. If an AI could learn the hard way to drive and to knit, it would be able to do everything a human could do. I estimate that controlling vehicles is about four different skills by that definition (road vehicles, fixed-wing aircraft, rotary-wing aircraft, and reaction-mass spacecraft), but knitting, crocheting, and sewing are the same skill, and there are probably only two or three different skills that cover all of athletics (an AI that could learn to play football could probably learn curling, but it might not be able to learn gymnastics or swimming).
comment by Discredited
· score: 0 (0 votes) · LW
AIs haven't demonstrated that they can learn to do anything the hard way
The tasks pointed out so far (driving, knitting, sports) all have a huge mechanical component, where a body and its parts (the wheels of the car, the needle, whatever actuators your athletes have) are steered to make certain motions. Tasks like that use control theory, maybe with a learned kinematic model, and some reinforcement learning.
But that's a shallow representation of those tasks. There is much more to each of them than the motions. With real car driving, for instance, you want a ton of sensors like accelerometers and cameras on your vehicle, because the state space of a car is much higher-dimensional than, say, the length to which a hydraulic piston has been extended. You need much more data to establish what state the car is in, and what changes would put it in a better state, so that it's not rocking horribly on a gravel road or running over shrubs and pedestrians, et cetera. Having lots of sensors means you want high-dimensional sensor fusion, which means a fancy probabilistic model. The state of the car can also depend on weird abstract properties of the environment, like the shape of squiggly black lines on a metal sign you passed a few minutes ago. So you also want machine vision algorithms that can recognize percepts in the wild that match prior learned patterns like "15 mph when children are present".
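The sensor-fusion point can be sketched with the simplest possible probabilistic model: a one-dimensional Kalman filter fusing a stream of noisy position readings into a single estimate. (The numbers here are made up for illustration; a real car fuses many sensors over a much richer state.)

```python
import random

def kalman_1d(measurements, meas_var, process_var, init_est=0.0, init_var=1.0):
    """Minimal 1-D Kalman filter: fold each noisy reading into a running
    estimate whose variance shrinks as evidence accumulates."""
    est, var = init_est, init_var
    for z in measurements:
        # Predict: uncertainty grows by the process noise each step.
        var += process_var
        # Update: weight the new reading by the Kalman gain.
        gain = var / (var + meas_var)
        est = est + gain * (z - est)
        var = (1.0 - gain) * var
    return est, var

# Simulated sensor: the car is really at position 5.0, readings are noisy.
random.seed(0)
true_pos = 5.0
noisy = [true_pos + random.gauss(0.0, 0.5) for _ in range(200)]
est, var = kalman_1d(noisy, meas_var=0.25, process_var=1e-4)
```

After 200 readings the estimate sits close to the true position with a variance far below that of any single sensor reading, which is the whole point of fusion.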
And that's just the kind of math you want for an AI to know what is happening outside. It doesn't account for inferring a 3D map of solid things in the world, things that you don't want to crash into, or planning paths through a labyrinthine city, or not swerving so hard that your passengers get whiplash.
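The path-planning piece, at least, is well-trodden math. A minimal sketch, treating the city as a grid with obstacles and running Dijkstra's algorithm over it (a real planner would add road networks, traffic, and kinematic constraints):

```python
from heapq import heappush, heappop

def shortest_path(grid, start, goal):
    """Dijkstra over a 4-connected grid; '#' cells are obstacles.
    Returns the length of the shortest path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heappush(heap, (nd, (nr, nc)))
    return None

# A toy labyrinthine city.
city = [
    "....#",
    ".##.#",
    ".#...",
    ".#.#.",
    "...#.",
]
steps = shortest_path(city, (0, 0), (4, 4))
```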
I think some of that complexity is being overlooked, since all of the tasks mentioned are so very mechanical on the surface of things.
For simple mechanical problems, I think it's absolutely fair to say that AI has learned to do things the hard way. We have AI math that lets robots learn the shapes of their own bodies, or that lets them flail around trying different motor activation strategies until they find one that moves the robot forward efficiently. Or consider a toy robotics task you might have seen before, the acrobot [1]. Robots can learn motions like that given a reward function that favors standing upright.
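The "flail around trying different motor activation strategies" loop is literally just this, with the physics replaced by a made-up stand-in function; a real setup would score each candidate gait in a simulator or on hardware:

```python
import math
import random

def forward_speed(freq, phase):
    """Hypothetical stand-in for a physics simulator: some gait
    parameters move the robot forward better than others (this toy
    reward peaks at freq=2.0, phase=0.5)."""
    return math.exp(-((freq - 2.0) ** 2 + (phase - 0.5) ** 2))

random.seed(1)
best_params, best_reward = None, float("-inf")
for _ in range(5000):
    # "Flail around": sample a motor activation strategy at random...
    params = (random.uniform(0, 4), random.uniform(0, 1))
    reward = forward_speed(*params)
    # ...and keep whichever one moves the robot forward fastest.
    if reward > best_reward:
        best_params, best_reward = params, reward
```

Pure random search is the dumbest possible version; real systems use policy-gradient or evolutionary methods, but the shape of the loop is the same.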
You might say, as people often do, with some merit, that the robot isn't being creative, it's just executing a programmed algorithm. This argument proves too much, because it doesn't allow "real creativity" to pseudo-deterministic systems like humans, but it still has a shred of truth. A robot that learns to control its motors doesn't learn to learn to control its motors. It doesn't study the math of control theory, or the physics of kinematics, or the engineering of building cheap, robust power transfer systems.
This is not a buck that can be passed indefinitely. At some point, it is right and proper to say that the source of AI's power is humans, since AIs are not writing themselves into existence out of the aether. But it is a buck that can be passed a few times. This is one major sense in which current AI is narrow. Not that AI can't do much: you can read pages and reams of citations of successful AI applications. But even if you look at the AI industry as a whole today, I think it should rightly be called Narrow, since AI, by which I basically mean the sum total of written algorithms and software, doesn't have a reflective ability to apply its intelligence toward increasing its intelligence: looking back to mathematics, physics, and values (of which it has little, because the world is dragging its feet on preference modelling) to figure out how to make good things happen.
Anyway, robotics has some cute examples of "learning the hard way". But I've been hinting at a much larger space of problem domains and problem-solving methods. What other tasks are there, in broad strokes, and how many existing problem-solving techniques proceed by, how shall we say, searching through a huge space of candidate solutions, a space which the programmer has only barely constrained by building in his own clever ideas?
I'm not going to catalogue what kinds of problems exist. That's a fine topic for academic curiosity, and is probably relevant to quantifying which problem domains are low-hanging fruit for human-designed algorithms and which are more difficult for us to solve: those merciful bottlenecks that we'll slowly chip away at until we reach an intelligence explosion that ruins the galaxy, absent urgent work in preference modelling. But I don't think the space of problems, those that are interesting to humans and non-trivial to computers, is very important to characterize for the general audience that might be reading this comment. I've already hinted at some of its structure with mentions of machine learning and path finding. Many other kinds of reasoning, planning, and designing surely come to mind given a little thought.
What I would like to point out is that, once we leave robotics and explore other kinds of "things learned the hard way", human intuitions about agency go a little fuzzy. When an AI doesn't have a conventional body, with joints and motors and sensors, or with squishy muscles and contractile proteins and retinas, or whatever, it's not as easy to see that there's a "person" there, something that humans reason about using our built-in Theory of Mind circuitry, rather than something inanimate.
When an AI doesn't have a body, and isn't stumbling around the floor and stacking blocks and smiling when you wave, when instead candidate solutions can be examined symbolically in computer memory, and design can be as simple as randomly casting about in a formally specified search space until it lands on the Pareto frontier... when that happens, it looks much less, to an external human observer, like there is a "person" "learning" "things". And this naive perspective is grossly inaccurate, because it is the algorithms, and not the smiles, that are formidable.
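That "randomly casting about until it lands on the Pareto frontier" is easy to make concrete. A toy sketch with two hypothetical objectives to minimize, say cost and weight, both invented for illustration:

```python
import random

def pareto_front(candidates):
    """Return the non-dominated candidates: those for which no other
    candidate is at least as good on both objectives and better on one.
    (Both objectives are minimized here.)"""
    front = []
    for c in candidates:
        dominated = any(
            o[0] <= c[0] and o[1] <= c[1] and o != c for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

# "Cast about randomly" in a two-objective design space: (cost, weight).
random.seed(2)
designs = [(random.random(), random.random()) for _ in range(300)]
front = pareto_front(designs)
```

No body, no smiles: just a few hundred candidate designs examined symbolically in memory, out of which a handful of non-dominated ones fall out.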
Hm. Long comment. I should say something poignant now. Summary: 1) Many "mechanical" problems are really cross domain problems with lots of hidden complications. 2) AI can learn things the hard way, really they can. 3) But keep in mind that learning to solve problems doesn't always look like a body gathering knowledge and experimenting in the real world.