[LINK]s: Who says Watson is only a narrow AI?
post by Shmi (shminux) · 2013-05-21T18:04:12.240Z · LW · GW · Legacy · 27 comments
OK, so it covers only a few human occupations:
- Trivia games (we all know about that one)
- Clinical diagnosis
- Banking advisor
- and now a call center grunt
But the list is steadily growing.
Now, connect it with a self-driving AI, and your cab's e-driver can make small talk, advise on a suspicious skin lesion, evaluate your investment portfolio, and help you fix an issue with your smartphone, all while cheaply and efficiently getting you to your destination.
How long until it can evaluate verbal or written customer requirements and write better routine software than your average programmer?
27 comments
Comments sorted by top scores.
comment by dhoe · 2013-05-21T19:34:17.384Z · LW(p) · GW(p)
I've spent a bit of time trying to understand what Watson does, and couldn't find a clear answer. I'd really appreciate a concise technical explanation.
What I got so far is that it runs a ton of different algorithms and combines the results using some sort of probabilistic reasoning to bet on the most likely correct answer. Is that roughly correct? And what are those algorithms, then?
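For what it's worth, the ensemble idea described above can be sketched in a few lines. This is a toy illustration of combining several scorers' evidence into one confidence, not Watson's actual pipeline; every scorer name, score, and weight below is invented:

```python
import math

# Hypothetical evidence scores from three independent "answer scorers" for two
# candidate answers; every name and number here is made up for illustration.
candidate_scores = {
    "Toronto": {"passage_match": 0.2, "type_check": 0.1, "popularity": 0.6},
    "Chicago": {"passage_match": 0.7, "type_check": 0.8, "popularity": 0.5},
}

# How much to trust each scorer (in a real system these weights would be learned).
weights = {"passage_match": 2.0, "type_check": 1.5, "popularity": 0.5}

def combined_confidence(scores):
    """Combine per-scorer evidence into one confidence via weighted log-odds."""
    logit = sum(w * (math.log(scores[name]) - math.log(1 - scores[name]))
                for name, w in weights.items())
    return 1 / (1 + math.exp(-logit))

for candidate, scores in candidate_scores.items():
    print(candidate, round(combined_confidence(scores), 3))

best = max(candidate_scores, key=lambda c: combined_confidence(candidate_scores[c]))
print("bet on:", best)
```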
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2013-05-22T10:03:04.931Z · LW(p) · GW(p)
Did you see this summary? (The actual description of the system starts on page 9.)
EDIT: Also, the list of papers citing that article may contain papers with further detail. For example, that list contains Question analysis: How Watson reads a clue, which goes into considerably more detail about the question-analysis stage.
Replies from: dhoe
comment by Morendil · 2013-05-21T20:58:09.591Z · LW(p) · GW(p)
write better routine software than your average programmer
The average programmer already emulates the Watson algorithm - search Google for answers to "how do I" (sort a list, create a new Qt window, rotate a cube in OpenGL) and slap together any likely-looking chunks of code in a language that might compile. It's even automated already.
The only problem is, of course, GIGO.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-21T22:39:07.130Z · LW(p) · GW(p)
I say Watson is only a narrow AI.
Replies from: MugaSofer, shminux
↑ comment by Shmi (shminux) · 2013-05-21T23:02:09.518Z · LW(p) · GW(p)
:)
But isn't it getting too wide too quickly?
Anyway, I am guessing that by your definition the difference between a narrow and a general AI is not the number of problem solving or reasoning tasks where it is as good as or better than humans, even if it's the vast majority of these tasks, but having a "general, flexible learning ability that would let them tackle entirely new domains", i.e. being vastly better than an average single human being, who generally sucks at adapting to "new domains".
Replies from: Manfred
↑ comment by Manfred · 2013-05-22T00:24:59.312Z · LW(p) · GW(p)
the number of problem solving or reasoning tasks where it is as good as or better than humans
What, four?
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-05-22T01:32:10.551Z · LW(p) · GW(p)
Sigh. Charitable reading is not a strong suit of this place. Not four. How about 100? 1000? Would that be enough?
Replies from: Manfred
↑ comment by Manfred · 2013-05-22T01:35:10.470Z · LW(p) · GW(p)
Sorry, the silliness was too tempting. But however you want to count the number of things that go into what Watson does, it really is a small portion of the things humans can do.
Replies from: Decius
↑ comment by Decius · 2013-05-22T04:52:13.596Z · LW(p) · GW(p)
What exponent is on the number of things that humans can do, generalized to the degree of "drive cars"?
Replies from: Manfred
↑ comment by Manfred · 2013-05-22T14:31:58.466Z · LW(p) · GW(p)
Hm, tough question. One way to get a quick lower bound might be "what's something that uses the same general skills as driving cars, but is very different, and in how many ways is it different?" So if we use the same spatial skills to do knitting, and we say it's different from driving in a car in about 10 ways (where what we consider a "way" sets our scale), then there are at least 2^10 things that use the same skills as, but are different from, knitting and driving cars (among other bad assumptions, this assumes that two things being alike in some way is binary and transitive). If there are 10 domains (everything is approximately 10) like "spatial skills and spatial planning, basic motor coordination," then the lower bound would be more like 2^100.
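To make the arithmetic explicit (taking the rough assumptions above at face value; the numbers are guesses, not measurements):

```python
# Back-of-the-envelope lower bound from the comment above:
# ~10 binary "ways" a task can differ within one skill domain, ~10 such domains.
ways_per_domain = 10
domains = 10

per_domain = 2 ** ways_per_domain             # 1,024 task variants per domain
overall = 2 ** (ways_per_domain * domains)    # 2**100, roughly 1.3e30

print(per_domain, overall)
```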
Replies from: Decius
↑ comment by Decius · 2013-05-23T01:30:19.771Z · LW(p) · GW(p)
If all of the things that humans can do are required for knitting and driving cars, then there are two things that humans can do, generalized to that level. If an AI could learn the hard way to drive and to knit, it would be able to do everything a human could do. I estimate that controlling vehicles is about four different skills by that definition (road vehicles, fixed-wing, rotary-wing, and reaction-mass spacecraft), but knitting, crocheting, and sewing are the same skill, and there are probably only two or three different skills that cover all of athletics (an AI that could learn to play football would probably be able to learn curling, but it might not be able to learn gymnastics or swimming).
I think that our existing AIs haven't demonstrated that they can learn to do anything the hard way. I could be wrong, because I don't have any deep insight into whether existing AIs learn or are created with full knowledge.
Replies from: Discredited
↑ comment by Discredited · 2013-05-24T11:12:04.464Z · LW(p) · GW(p)
AIs haven't demonstrated that they can learn to do anything the hard way
The tasks pointed out so far (driving, knitting, sports) all have a huge mechanical component, where a body and its parts (the wheels of the car, the needle, whatever actuators your athletes have) are steered to make certain motions. Tasks like that use control theory, maybe with a learned kinematic model, and some reinforcement learning.
But that's a shallow representation of those tasks. There is much more to each of them than the motions. For instance, with real car driving, you want a ton of sensors like accelerometers and cameras on your vehicle, because the state of a car is much higher-dimensional than, say, the length a hydraulic piston has been extended, and so you need much more data to establish what state the car is in and what changes will put the car in a better state, so that it's not rocking horribly on a gravel road or running over shrubs and pedestrians, et cetera. Having lots of sensors means you want high-dimensional sensor fusion, which means a fancy probabilistic model. Also, the state of the car can depend on weird abstract properties of the environment, like the shape of squiggly black lines on a metal sign you passed a few minutes ago. So you also want machine vision algorithms that can recognize percepts in the wild that match prior learned patterns like "15 mph when children are present".
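As a minimal sketch of the sensor-fusion idea: a one-dimensional Kalman filter blending noisy speed readings with a "speed stays roughly constant" motion model. Real driving involves far higher-dimensional state, and every number below is invented for illustration:

```python
# Minimal 1-D Kalman filter: fuse noisy speedometer readings with a
# constant-speed motion model to estimate the car's true speed.
def fuse_speed(readings, process_var=0.5, sensor_var=4.0):
    estimate, variance = readings[0], sensor_var   # start from the first reading
    estimates = [estimate]
    for z in readings[1:]:
        variance += process_var                    # predict: uncertainty grows
        gain = variance / (variance + sensor_var)  # how much to trust the sensor
        estimate += gain * (z - estimate)          # update: blend in the reading
        variance *= (1 - gain)
        estimates.append(estimate)
    return estimates

noisy_speeds = [30.0, 32.1, 29.4, 31.0, 55.0, 30.8]   # one wild outlier
print([round(s, 1) for s in fuse_speed(noisy_speeds)])
```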
And that's just the kind of math you want for an AI to know what is happening outside. It doesn't account for inferring a 3D map of the solid things in the world, things that you don't want to crash into, or planning paths through a labyrinthine city, or not swerving so hard that your passengers get whiplash.
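Path planning, at its simplest, is just graph search. A toy sketch over a made-up city grid (breadth-first search, purely illustrative):

```python
from collections import deque

# Toy "city": 0 = open road, 1 = blocked. The map is made up.
grid = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def shortest_path(start, goal):
    """Breadth-first search over the grid; returns the cells along one shortest route."""
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                queue.append(nxt)
    return None  # no route exists

print(shortest_path((0, 0), (3, 3)))
```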
I think some of that complexity might be getting overlooked, since all of the tasks mentioned are so very mechanical on the surface of things.
For simple mechanical problems, I think it's absolutely fair to say that AI has learned to do things the hard way. We have AI math that lets robots learn the shapes of their bodies, or lets them flail around trying different motor activation strategies until they find one that moves the robot forward efficiently. Or take the toy robotics task you might have seen before, the acrobot: robots can learn motions like that given a reward function that favors standing upright.
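The "learning the hard way" loop can itself be tiny. Here is a sketch of random policy search on a made-up one-dimensional balancing task; the acrobot needs real physics, so this stand-in only shows the shape of the loop (try parameters, keep whatever earns more reward):

```python
import random

def rollout(gain):
    """Crude 1-D balancing task: push back against the tilt angle.
    Reward is higher the closer we stay to upright. The physics is made up."""
    angle, velocity, reward = 0.3, 0.0, 0.0
    for _ in range(100):
        torque = -gain * angle                # the whole "policy": one parameter
        velocity += 0.1 * (angle + torque)    # gravity tips us over, torque resists
        angle += 0.1 * velocity
        reward += 1.0 - min(abs(angle), 1.0)  # near upright earns nearly 1 per step
    return reward

best_gain, best_reward = 0.0, rollout(0.0)
for _ in range(200):                          # "flail around": random search
    candidate = best_gain + random.gauss(0, 0.5)
    score = rollout(candidate)
    if score > best_reward:
        best_gain, best_reward = candidate, score

print(round(best_gain, 2), round(best_reward, 1))
```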
You might say, as people often do, with merit, that the robot isn't being creative, it's just executing a programmed algorithm. This argument proves too much, because it doesn't allow "real creativity" for pseudo-deterministic systems like humans, but it still has a shred of truth. A robot that learns to control its motors doesn't learn to learn to control its motors. It doesn't study the math of control theory or the physics of kinematics or the engineering of building cheap, robust power transfer systems.
This is not a buck that can be passed indefinitely. At some point, it is right and proper to say the source of AI's power is humans, since AIs are not writing themselves into existence out of the aether. But it is a buck that can be passed a few times. This is one major sense in which current AI is narrow. Not that AI can't do much - you can read pages and reams of citations of successful AI applications. Even if you look at the AI industry as a whole today, I think it should rightly be called narrow, since AI, by which I basically mean the sum total of written algorithms and software, doesn't have a reflective ability to apply its intelligence toward increasing its intelligence, looking back to mathematics and physics and values (of which it has little, because the world is dragging its feet on preference modelling) to figure out how to make good things happen.
Anyway, robotics has some cute examples of "learning the hard way". But I've been hinting at a much larger space of problem domains and problem-solving methods. What other tasks are there, in broad strokes, and how many existing problem-solving techniques proceed by, how shall we say, searching through a huge space of candidate solutions - a space which the programmer has barely constrained by building in his own clever ideas?
I'm not going to write out what kinds of problems exist. That's a fine topic for academic curiosity, and is probably relevant to quantifying which problem domains are low-hanging fruit for human-designed algorithms and which are more difficult for us to solve - those merciful bottlenecks that we'll slowly chip away at till we reach an intelligence explosion that ruins the galaxy absent urgent work in preference modelling. But I don't think the space of problems, those that are interesting to humans and non-trivial to computers, is very important to characterize for the general audience that might be reading this comment. I've already hinted at some of its structure with mentions of machine learning and path finding. Many other kinds of reasoning, planning, and designing surely come to mind given a little thought.
What I would like to point out is that, once we leave robotics and explore other kinds of "things learned the hard way", human intuitions about agency go a little fuzzy. When an AI doesn't have a conventional body, with joints and motors and sensors, or with squishy muscles and contractile proteins and retinas, or whatever, it's not as easy to see that there's a "person" there, something that humans reason about using our built-in Theory of Mind circuitry, rather than something inanimate.
When an AI doesn't have a body, and isn't stumbling around the floor and stacking blocks and smiling when you wave, when instead candidate solutions can be examined symbolically in computer memory, and design can be as simple as randomly casting about a formally specified search space until it lands on the Pareto frontier... when that happens, it looks much less to an external human observer like there is a "person" "learning" "things". And this naive perspective is grossly inaccurate, because it is the algorithms and not the smiles that are formidable.
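That "casting about a search space" can literally be a dozen lines. A sketch with two invented objectives, keeping whichever randomly sampled designs no other sample beats on both objectives at once:

```python
import random

# Each "design" has two competing qualities, both invented for illustration:
# cost (lower is better) and strength (higher is better).
designs = [(random.random(), random.random()) for _ in range(500)]

def dominated(a, b):
    """True if b is at least as good as a on both objectives and better on one."""
    return b[0] <= a[0] and b[1] >= a[1] and (b[0] < a[0] or b[1] > a[1])

pareto = sorted(d for d in designs if not any(dominated(d, e) for e in designs))
print(len(pareto), "non-dominated designs out of", len(designs))
```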
Hm. Long comment. I should say something poignant now. Summary: 1) Many "mechanical" problems are really cross domain problems with lots of hidden complications. 2) AI can learn things the hard way, really they can. 3) But keep in mind that learning to solve problems doesn't always look like a body gathering knowledge and experimenting in the real world.
comment by Wei Dai (Wei_Dai) · 2013-05-22T22:14:08.964Z · LW(p) · GW(p)
What is the point of debating whether Watson should be called "general" or "narrow"? Do you think people who call Watson "narrow" are wrong in some substantial way (e.g., have wrong expectations or plans)? If so how?
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-05-22T22:24:01.901Z · LW(p) · GW(p)
Fair question. On this forum, narrow AI = not really intelligent and benign, while general AI = potentially smarter than humans, ready to FOOM, and dangerous. My point was that Watson might some day provide an example of a smarter-than-human but benign (not FOOMable) AI, depending on how it is designed.
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2013-05-23T06:05:17.252Z · LW(p) · GW(p)
I think I understand your point, and have preemptively written a response at http://lesswrong.com/lw/bob/reframing_the_problem_of_ai_progress/. (In short, if Watson becomes smarter-than-human in many domains, it seems inevitable that the technological progress involved will be useful for building FOOMable AIs, even if Watson isn't itself FOOMable.) If this doesn't address your point, then I've probably misunderstood it, in which case maybe you can restate it in more detail?
comment by [deleted] · 2013-05-21T19:22:39.338Z · LW(p) · GW(p)
Weren't expert systems good at those kinds of things for several decades?
Replies from: Vaniver
↑ comment by Vaniver · 2013-05-21T20:44:20.413Z · LW(p) · GW(p)
Yes. What makes Watson exciting is that it can understand text well enough to prepare the text as inputs for expert systems; in medicine, for example, most expert systems needed an expert as I/O, and so they were of limited usefulness.
Watson is also backed by a huge corporation, which makes it easier to surmount obstacles like "but doctors don't like competition."
Replies from: peterward
↑ comment by peterward · 2013-05-22T03:20:37.896Z · LW(p) · GW(p)
Watson is also backed by a huge corporation, which makes it easier to surmount obstacles like "but doctors don't like competition."
On the other hand being a huge corporation makes it harder to surmount "relying on marketing hype to inflate the value-added of the product."
At any rate, the company I work for relies heavily on Cognos, and the metrics there seem pretty arbitrary: hocus pocus to conjure simple numbers so directors can pretend they're making informed decisions rather than operating on blind guesswork and vanity... and to rationalize firings, skimpy raises, additional bureaucracy, and other unhappy decisions.*
*Come to think of it, "intelligence" or not, Cognos does emulate Homo sapiens psychology to a high degree of approximation.
comment by ForestHughes · 2013-05-22T22:01:28.362Z · LW(p) · GW(p)
Is anyone else interested in the possibility of Watson or Watson-like systems acting as a "juror" on court cases? Does Watson have any known biases?
comment by wedrifid · 2013-05-22T02:09:50.643Z · LW(p) · GW(p)
Who says Watson is only a narrow AI?
From what I can tell, most people in the world who have considered the question, with the exception of shminux, say that Watson is a narrow AI. For example, googling "Watson narrow AI" produces many results describing it as a narrow AI, while googling "Watson general AI" produces no results describing Watson as a GAI (and "Watson GAI" thinks I really mean "Watson Guy"). The Wikipedia article describing what general artificial intelligence means also seems to exclude Watson as a candidate.
comment by nigerweiss · 2013-05-23T03:52:11.862Z · LW(p) · GW(p)
Watson is pretty clearly narrow AI, in the sense that if you called it General AI, you'd be wrong. There are simple cognitive tasks (like making a plan to solve a novel problem, modelling a new system, or even just playing Parcheesi) that it just can't do, at least not without a human writing a bunch of new code to add a module that does that new thing. It's not powerful in the way that a true GAI would be.
That said, Watson is a good deal less narrow than, say, Deep Blue. Watson has a great deal of analytic depth in a reasonably broad domain (structured knowledge extraction from unformatted English), which is a major leap forward. You might say that Watson is a rough analog to a language center connected to a memory system, sitting in a box. It's not a GAI by itself, but it could be a substantial component of one down the line.
comment by Petruchio · 2013-05-22T13:18:36.400Z · LW(p) · GW(p)
Watson sounds cool, but this is a far step from General Artificial Intelligence. But how general is Watson as of now?
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-05-22T15:09:25.404Z · LW(p) · GW(p)
Not very, but it's already better than humans in several unrelated areas, and the list is getting longer. If some day it can behave in a more human way than any single human, would it still be narrow?
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2013-05-22T22:13:10.061Z · LW(p) · GW(p)
The links in the OP are just press releases about pilot projects and product announcements. It seems too early to say "it's already better than humans in several unrelated areas"?
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-05-22T22:20:18.893Z · LW(p) · GW(p)
Probably. We'll have to wait for the production versions. But it does not appear to be just hype.