General intelligence test: no domains of stupidity

post by Stuart_Armstrong · 2013-05-21T16:04:59.806Z · LW · GW · Legacy · 36 comments

The conversation on my post criticising the Turing test has been productive. I claimed that I wouldn't take the Turing test as definitive evidence of general intelligence if the agent had been specifically optimised for the test. I was challenged as to whether I had a definition of thinking other than "able to pass the Turing test". As a consequence of that exchange, I think I do.

Truly general intelligence is impossible, because of various "no free lunch" theorems, which demonstrate that no algorithm can perform well in every environment (intuitively, this makes sense: a smarter being could always design an environment that specifically penalises a particular algorithm). Nevertheless, we have the intuitive definition of a general intelligence as one that performs well in most (or almost all) environments.
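
To make the flavour of those theorems concrete, here is a minimal sketch (my own illustration in Python, not the formal Wolpert and Macready statement; the tiny domain, the fixed search orders and the "best value after two evaluations" score are all choices made for the example): summed over every possible objective function on a small domain, two different fixed search orders do exactly as well as each other.

```python
import itertools

# Informal "no free lunch" illustration: over ALL objective functions on a
# small domain, any two non-repeating fixed search orders do equally well
# on average. (A sketch only; the full theorems also cover adaptive searchers.)

DOMAIN = [0, 1, 2, 3]   # the points we can evaluate
VALUES = [0, 1]         # the possible objective values

def best_after_k(order, f, k=2):
    """Best objective value seen after evaluating the first k points of `order`."""
    return max(f[x] for x in order[:k])

# Enumerate every possible objective function as a tuple of values, one per point.
all_functions = list(itertools.product(VALUES, repeat=len(DOMAIN)))

searcher_a = [0, 1, 2, 3]   # scans left to right
searcher_b = [3, 1, 0, 2]   # some other fixed order

score_a = sum(best_after_k(searcher_a, f) for f in all_functions)
score_b = sum(best_after_k(searcher_b, f) for f in all_functions)

print(score_a, score_b)     # identical totals: neither order is better overall
```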

I'd like to reverse that definition, and define a general intelligence as one that doesn't perform stupidly in a novel environment. A small change of emphasis, but it gets to the heart of what the Turing test is meant to do, and why I questioned it. The idea of the Turing test is to catch the (putative) AGI performing stupidly. Since we can't test the AGI on every environment, the idea is to make the Turing test as general as possible in what it could potentially probe. If you give me the questions in advance, I can certainly craft an algorithm that aces that test; similarly, you can construct an AGI that would ace any given Turing test. But since the space of reasonable conversations is combinatorially huge, and since the judge could potentially pick any element of that space, the AGI could not get by with a narrow list of responses: it would have to be genuinely generally intelligent, so that it would not end up being stupid in the particular conversation it found itself in.

That's the theory, anyway. But maybe the space of conversations isn't as vast as all that, especially if the AGI has some simple classification algorithms. Maybe the data on the internet today, combined with some reasonably cunning algorithms, can carry a conversation as well as a human. After all, we are generating examples of conversations by the millions every hour of every day.

Which is why I emphasised testing from outside the AGI's domain of competence. You need to introduce it to a novel environment, and give it the possibility of being stupid. If the space of human conversations isn't large enough, you need to move to the much larger space of real-world problem solving - and pick something from it. It doesn't matter what it is, simply that you have the potential of picking anything. Hence only a general intelligence could be confident, in advance, of coping with it. That's why I emphasised not saying what your test was going to be, and changing the rules or outright cheating: the fewer restrictions you allow on the potential test, the more informative the actual test is.

A related question, of course, is whether humans are generally intelligent. Well, humans are stupid in a lot of domains. Human groups augmented by data and computing technology, and given enough time, are much more generally intelligent than individual humans. So general intelligence is a matter of degree, not a binary classification (though it might be nearly binary for some AGI designs). Thus whether you call humans generally intelligent is a matter of taste and emphasis.

36 comments

comment by OrphanWilde · 2013-05-21T17:29:17.600Z · LW(p) · GW(p)

Put a human being in an environment which is novel to them. Say, empiricism doesn't hold - the laws of this environment are such that "That which has happened before is less likely to happen again" (a reference to an old Overcoming Bias post I can't locate).

Is that human being going to behave "stupidly" in this environment? Do -we- fail the intelligence test? You acknowledge that we could - but if you're defining intelligence in such a way that nothing actually satisfies that definition, what the heck are you achieving, here?

I'm not sure your criterion is all that useful. (And I'm not even sure it's that well defined, actually.)

Replies from: magfrump, Stuart_Armstrong, jmmcd, MugaSofer
comment by magfrump · 2013-05-22T05:12:23.189Z · LW(p) · GW(p)

People fail at novel environments as mundane as needing to find a typo in an HTML file or paying attention to fact-checks during political debates. You don't have to come up with extreme philosophical counterexamples to find domains in which it's interesting to distinguish between the behavior of different non-experts (and in which these differences feel like "intelligence").

comment by Stuart_Armstrong · 2013-05-22T15:07:31.716Z · LW(p) · GW(p)

but if you're defining intelligence in such a way that nothing actually satisfies that definition, what the heck are you achieving, here?

The no free lunch theorems imply there's no such thing as a universal definition of general intelligence. So I think general intelligence should be a matter of degree, rather than kind.

I'm not sure your criterion is all that useful. (And I'm not even sure it's that well defined, actually.)

It's not well defined, yet. But I think there's a germ of a good idea there, that I'm teasing out with the help of commenters here.

comment by jmmcd · 2013-05-21T18:47:40.809Z · LW(p) · GW(p)

"That which has happened before is less likely to happen again" (a reference to an old Overcoming Bias post I can't locate).

Good point. In fact, that is the type of environment which is required for the No Free Lunch theorems mentioned in the post to even be relevant. A typical interpretation in the evolutionary computing field would be that it's the type of environment where an anti-GA (a genetic algorithm which selects individuals with worse fitness) does better than a GA. There are good reasons to say that such environments can't occur for important classes of problems typically tackled by EC. In the context of this post, I wonder whether such an environment is even physically realisable.

(I think a lot of people misinterpret NFL theorems.)

comment by MugaSofer · 2013-05-29T10:21:34.158Z · LW(p) · GW(p)

Say, empiricism doesn't hold - the laws of this environment are such that "That which has happened before is less likely to happen again" (a reference to an old Overcoming Bias post I can't locate).

Then we would observe this, and update on it - after all, this mysterious law is presumably immune to itself, or it would have stopped by now, right?

Replies from: OrphanWilde
comment by OrphanWilde · 2013-05-29T13:18:28.158Z · LW(p) · GW(p)

I'm curious to know how you expect Bayesian updates to work in a universe in which empiricism doesn't hold. (I'm not denying it's possible, I just can't figure out what information you could actually maintain about the universe.)

Replies from: MugaSofer, Creutzer
comment by MugaSofer · 2013-05-30T10:19:04.106Z · LW(p) · GW(p)

If things have always been less likely after they happened in the past, then, conditioning on that, something happening is Bayesian evidence that it won't happen again.
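
A toy numerical sketch of that (my own illustration; the two hypotheses, their recurrence probabilities and the observation sequence are invented for the example): an agent that entertains both an "inductive" and an "anti-inductive" world model can still update in the ordinary Bayesian way, and a run of non-recurrences pushes its credence toward the anti-inductive hypothesis, after which, conditional on that hypothesis, each occurrence is indeed evidence against its own recurrence.

```python
# Two made-up hypotheses about the world, distinguished by how likely an event
# is to recur once it has already happened.
P_RECUR = {"inductive": 0.8, "anti_inductive": 0.2}  # P(event recurs | hypothesis)

prior = {"inductive": 0.5, "anti_inductive": 0.5}

observations = [False, False, True, False, False]    # did the event recur each time?

posterior = dict(prior)
for recurred in observations:
    # Likelihood of this observation under each hypothesis, then normalise.
    likelihoods = {h: (p if recurred else 1 - p) for h, p in P_RECUR.items()}
    unnorm = {h: posterior[h] * likelihoods[h] for h in posterior}
    total = sum(unnorm.values())
    posterior = {h: v / total for h, v in unnorm.items()}

print(posterior)  # mostly non-recurrences, so probability mass shifts to "anti_inductive"
```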

comment by Creutzer · 2013-05-29T20:25:51.662Z · LW(p) · GW(p)

What exactly do you mean by "empiricism does not hold"? Do you mean that there are no laws governing reality? Is that even a thinkable notion? I'm not sure. Or perhaps you mean that everything is probabilistically independent from everything else. Then no update would ever change the probability distribution of any variable except the one on whose value we update, but that is something we could notice. We just couldn't make any effective predictions on that basis - and we would know that.

comment by Kawoomba · 2013-05-21T16:31:42.306Z · LW(p) · GW(p)

One oft-neglected aspect of a Turing Test is the other guy - the human whom you need to distinguish from the machine. The usual tricks of the trade include e.g. asking a question such as "If you could pose a question to a Turing test candidate, what would it be?", which supposedly confuses a non-AGI, but not a hew-mon.

However, have you ever asked your gramps such a question? A barely literate goatherder in Pakistan? And in writing, no less.

The same applies when extending the Turing Test to other problem domains. The goal isn't to tell apart Pandora from her box, but a plain ol' average "Is this thing on now, dear?" homo barely sapiens.

Replies from: ESRogs
comment by ESRogs · 2013-05-21T19:44:58.899Z · LW(p) · GW(p)

It sounds like you're pointing out that people often overestimate the difficulty of passing the Turing Test. Is that what you mean to say?

Do you think the Turing Test (at the difficulty level you describe) is a reasonable test of intelligence?

Replies from: Kawoomba
comment by Kawoomba · 2013-05-21T20:12:09.732Z · LW(p) · GW(p)

It sounds like you're pointing out that people often overestimate the difficulty of passing the Turing Test. Is that what you mean to say?

Yes. I think the Turing Test is useful, but that there are too many quite distinct tests mapping to "Turing Test", and details matter. Using college students as volunteers will lead to markedly different results than using humans drawn at random from anywhere on Earth.

As is the case so often, many disagreements I've seen boil down to (usually unrecognized) definitional squabbles. Without clarification, the statement "A Turing Test is a reasonable test for intelligence" just isn't well defined enough. Which Turing Test? Reasonable in terms of optimality, in terms of feasibility, or in what way? Intelligence in some LW "optimizing power above a certain threshold" sense (if so, what threshold?), or some other notion?

You thankfully narrowed it down to the specific version of the Turing Test I mentioned, but in truth I don't have only one concept of intelligence I find useful, in the sense that I can see various concepts of intelligence being useful in different contexts. I pay no special homage to "intelligence1" over "intelligence2". Concerning this discussion:

I think that human-level intelligence - and the Turing Test is invariably centered on humans as the benchmark - shouldn't be defined by educated gifted people, but by an average. An "average human" Turing Test being passed is surely interesting, not least from a historical perspective. However, it's not clear whether such an algorithm would be powerful enough to foom, or to do that many theoretically interesting tasks. Many less privileged humans can't do that many interesting tasks better than machines, apart from recognizing tanks and cats in pictures.

So should we focus on a Turing Test tuned to an AGI on par with fooling the best researchers into believing it to be a fellow AI researcher? Maybe, although if we had a "winner", we'd probably know just by looking out the window, before we even set up the test (or we'd know by the AI looking in ...).

All considered, I'd focus on a Turing Test which can fool average humans in the civilized world, which seems to be the lowest Turing Test level at which such a chatbot would have a transformative influence on social human interactions.

Replies from: jmmcd, ESRogs
comment by jmmcd · 2013-05-21T21:01:01.495Z · LW(p) · GW(p)

Don't forget that the goal in the Turing Test is not to appear intelligent, but to appear human. If an interrogator asks "what question would you ask in the Turing test?", and the answer is "uh, I don't know", then that is perfectly consistent with the responder being human. A smart interrogator won't jump to a conclusion.

comment by ESRogs · 2013-05-21T22:05:35.616Z · LW(p) · GW(p)

Thank you, that was a fantastic answer to my questions (and more)!

comment by shminux · 2013-05-21T18:20:20.954Z · LW(p) · GW(p)

On the missing definition of "being stupid": I presume it means something like "failing to achieve a substantial portion of its explicit goals". In your example the goal is to convince the other side of being human-like. Is this a reasonable definition?

comment by PhilGoetz · 2013-05-24T18:12:10.420Z · LW(p) · GW(p)

I don't read the Turing test as being designed to prove "intelligence" at all. It's an assertion of materialism. People who oppose the Turing test say that it is useless because it only tests behavior, while what really makes a person a person is some magic inside them. John Searle says we can deduce via thought experiment that humans must have some "consciousness stuff" inside them, a type of matter that makes them conscious by its physical composition, that is conscious the way water is wet. At a lecture of his, I tried to get him to answer the question of what he would say if a computer passed the Turing test, but he dodged the question. To be consistent with his writings (which is not entirely possible, as they are not internally consistent), he would have to say that passing the Turing test means nothing at all, since his Chinese room posits a computer that passes the Turing test yet is "not intelligent".

Getting someone to accept the Turing test as a test of intelligence is a sneaky way of getting them to accept it as a test for consciousness / personhood. It's especially sneaky because failing the Turing test is not really their final objection to the claim that a computer can be conscious, but people will commit themselves to "Computers can't pass the Turing test" because they believe it. After computers pass the Turing test, it will be harder for people who made a big deal out of the Turing test to admit that they don't care whether a computer passed it or not, they still want to keep it as their slave.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-05-27T17:13:05.110Z · LW(p) · GW(p)

I agree with your point, but that's about the uses of the Turing test, not about how good it actually is.

comment by Vaniver · 2013-05-21T21:16:04.652Z · LW(p) · GW(p)

I don't think this captures the fundamental nature of intelligence, and I think others are right to throw an error at the word "stupidly."

Suppose there is some cognitive faculty, which we'll call adaptability, which agents have. When presented with a novel environment (i.e. sense data, set of possible actions, and consequences of those actions if taken), more adaptable agents will more rapidly choose actions with positive consequences.

Suppose there is some other cognitive faculty, which we'll call knowledge, which agents also have. This is a characterization of the breadth of environments to which they have adapted, and how well they have adapted to them.

Designing an agent with specific knowledge requires adaptability on the part of the designer; designing an agent with high adaptability requires adaptability on the part of the agent. Your general criticism seems to be "an agent can become knowledgeable about human conversations carried out over text channels with little adaptability of its own, and thus that is not a good test of adaptability."

I would agree: a GLUT (giant lookup table) written in stone, which is not adaptable at all, could still contain all the knowledge necessary to pass the Turing test. An adaptable algorithm could pass the Turing Test, but only after consuming a sample set containing thousands of conversations and millions of words and then participating in those conversations itself. After all, that's how I learned to speak English.
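
For concreteness, a toy sketch of the GLUT as a data structure (my own illustration; the canned replies are made up): a fixed table from the exact conversation-so-far to a reply, with no adaptability anywhere in the system.

```python
# Toy GLUT-style responder: a completely fixed lookup table from conversation
# history to reply. Nothing here adapts; all the "knowledge" was baked in by
# whoever wrote the table. A real GLUT for the Turing test would need an entry
# for every possible conversation prefix, which is astronomically large.

REPLIES = {
    (): "Hello! How are you?",
    ("Hello! How are you?", "Fine, and you?"): "Can't complain. Read anything good lately?",
    # ... one entry per possible conversation prefix ...
}

def glut_reply(history):
    """Return the canned reply for this exact conversation history, if any."""
    return REPLIES.get(tuple(history), "Hmm, tell me more.")

print(glut_reply([]))
print(glut_reply(["Hello! How are you?", "Fine, and you?"]))
```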

Perhaps there is an optimal learner that we can compare agents against. But communication has finite information transfer, and the bandwidth varies significantly; the quality of the instruction (or the match between the instruction and the learner) should be part of the test. Even exploration is an environment where knowledge can help, especially if the exploration is in a field linked to reality. (Indeed, it's not clear that humans are adaptable to anything, and so the binary "adaptable or not?" makes as much sense as an "intelligent or not?".)

These two faculties suggest different thresholds for AI: an AI can eat the jobs of knowledge workers once it has their knowledge, and an AGI can eat the job of creating knowledge workers once it has adaptability.

(Here I used two clusters of cognitive faculties, but I think the DIKW pyramid is also relevant.)

comment by Lumifer · 2013-05-21T16:53:40.924Z · LW(p) · GW(p)

define a general intelligence as one that doesn't perform stupidly in a novel environment

What does "stupidly" mean in this context?

It also seems to me you're setting up a very high bar, one that you yourself admit (individual) humans generally can't reach. If they can't, why set it so high, then? Since we can't get to even human-level intelligence at the moment, there doesn't seem to be much sense in speculating about designing even harder tests for AIs.

comment by DanArmak · 2013-05-24T11:52:11.136Z · LW(p) · GW(p)

If the space of human conversations isn't large enough, you need to move to the much larger space of real-world problem solving

You can discuss real-world problem solving in a conversation. As such, doesn't the Turing test already encompass this?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-05-24T12:21:12.554Z · LW(p) · GW(p)

No, because there is a difference between describing an action as well as a human could, and performing that action as well as a human could. For many actions (e.g. 3d motion), the first is much easier than the second.

Replies from: DanArmak
comment by DanArmak · 2013-05-24T12:35:25.869Z · LW(p) · GW(p)

The AI can't perform human-like motions because it doesn't have a human-like body, but the test isn't supposed to penalize it for that. That's why the test is done through text-only chat and not in person.

If we limit ourselves to actions that can be remotely controlled via a text-like low-bandwidth interface, such actions can be described or simulated as part of the test. This doesn't include everything humans do, certainly, but neither can humans do everything an AI does. I think an AI that can pass such a test is for all intents and purposes a human-equivalent AGI.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-05-27T17:15:57.388Z · LW(p) · GW(p)

The AI can't perform human-like motions because it doesn't have a human-like body, but the test isn't supposed to penalize it for that. That's why the test is done through text-only chat and not in person.

Upload a video and have it identify the puppy. You don't need a body to do that.

Replies from: DanArmak
comment by DanArmak · 2013-05-27T17:18:45.791Z · LW(p) · GW(p)

Of course - that's what I meant. I was responding to your words,

For many actions (e.g. 3d motion), the first is much easier than the second.

And by "3d motion" I thought you meant the way humans can instinctively move their own bodies to throw or catch a ball, but can't explicitly solve the equations that define its flight.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-05-27T17:49:15.479Z · LW(p) · GW(p)

If a language-optimised AI could control manipulators well enough to catch balls, that would indeed be huge evidence of general intelligence (maybe send them a joystick with a USB port override - the human grasps the joystick, the AI controls it electronically).

Replies from: DanArmak
comment by DanArmak · 2013-05-27T21:02:54.729Z · LW(p) · GW(p)

Given a 3d world model, predicting the ball's trajectory and finding an intercept point is very simple for a computer. The challenge is to turn sensory data into a suitable world model. I think there are already narrow AIs which can do this.
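
A minimal sketch of the first half of that claim (my own illustration; constant gravity, no air resistance and a fixed catch height are assumptions, and the function is hypothetical): given a clean position-and-velocity state for the ball, the intercept point is just a quadratic in time. Turning camera pixels into that state is the part deliberately left out.

```python
import math

# Given the ball's 3d position and velocity, find where it will be when it
# falls back to catch height. Pure kinematics under constant gravity; the hard
# part (building this state from raw sensor data) is not attempted here.

G = 9.81  # m/s^2

def intercept_point(pos, vel, catch_height=1.0):
    """Return ((x, y, z), t): where and when the ball reaches `catch_height` on the way down."""
    x0, y0, z0 = pos
    vx, vy, vz = vel
    # Solve z0 + vz*t - 0.5*G*t^2 = catch_height for the later (descending) root.
    a, b, c = -0.5 * G, vz, z0 - catch_height
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ball never reaches that height
    t = (-b - math.sqrt(disc)) / (2 * a)
    return (x0 + vx * t, y0 + vy * t, catch_height), t

# Ball thrown from 2 m up at 8 m/s forward, 1 m/s sideways, 6 m/s upward.
print(intercept_point((0.0, 0.0, 2.0), (8.0, 1.0, 6.0)))
```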

But this seems unrelated to speech production or recognition, or the other abilities needed to pass a classic Turing test. I think any AI that could pass a pure-language Turing test could have such a narrow AI bolted on.

It seems likely to me (although I am not an expert or even a well informed layman) that almost any human-built AI design will have many modules dedicated to specific important tasks, including visual recognition and a 3d world model that can predict simple movement. It wouldn't actually solve such problems using its general intelligence (or its language modules) from first principles.

But again, this is just speculation on my part.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-05-27T21:19:43.924Z · LW(p) · GW(p)

I think any AI that could pass a pure-language Turing test could have such a narrow AI bolted on.

That's precisely why the origin of the AI is so important - it's only if the general AI developed these skills without bolt-ons that we can be sure it's a real general intelligence.

Replies from: DanArmak
comment by DanArmak · 2013-05-27T21:38:36.940Z · LW(p) · GW(p)

That's a sufficient condition, but I don't think it's a necessary one - it's not only then that we'll know it has real GI (general intelligence). For instance it might have had, or adapted, narrow modules for those particular purposes before its GI became powerful enough.

Also, human GI is barely powerful enough to write the algorithms for new modules like that. In some areas we still haven't succeeded; in others it took us hundreds of person-years of R&D. Humans are an example that with good enough narrow modules, the GI part doesn't have to be... well, superhumanly intelligent.

Replies from: Stuart_Armstrong, Eugine_Nier
comment by Stuart_Armstrong · 2013-05-28T12:23:35.272Z · LW(p) · GW(p)

Yes - my test criteria are unfair to the AI (arguably the Turing test is as well). I can't think of methods that have good specificity as well as sensitivity.

comment by Eugine_Nier · 2013-05-28T01:55:31.441Z · LW(p) · GW(p)

On the other hand, we're perfectly capable of acquiring skills that we didn't evolve to possess, e.g., flying planes.

Replies from: DanArmak
comment by DanArmak · 2013-05-28T07:10:09.935Z · LW(p) · GW(p)

We do have a general intelligence. Without it we'd be just smart chimps.

But in most fields where we have a dedicated module - visual recognition, spatial modeling, controlling our bodies, speech recognition and processing and creation - our GI couldn't begin to replace it. And we haven't been able to easily create equivalent algorithms (and the problems aren't just computing power).

comment by syllogism · 2013-05-22T05:48:56.497Z · LW(p) · GW(p)

Related thought: is a conversation bot crudely optimised to game the Turing test smarter than a labrador or a dolphin?

Replies from: AlexMennen
comment by AlexMennen · 2013-05-23T02:14:27.455Z · LW(p) · GW(p)

I don't think Alex the parrot could have passed the Turing test, but his use of language was rather impressive. Given that such a conversation bot would be an extreme specialist, that parrot intelligence is much more general, and that the bot would be only slightly better at its specialty than Alex the parrot was, I'd say it's highly unlikely that such a conversation bot would be smarter than the average parrot (which I assume to be only slightly less intelligent than Alex) by any reasonable measure of intelligence. Same goes for dolphins, and probably even for labradors.

comment by NoSignalNoNoise (AspiringRationalist) · 2013-05-22T04:39:08.134Z · LW(p) · GW(p)

Perhaps the Turing test would work better if instead of having to pass for a human, the bot's insightfulness were rated and it had to be at the same level as a human's. Insightfulness seems harder to fake than "sounds superficially like a human" and it's what we care about anyway.

As a plus, it will make it easier for autistic people to pass the Turing test.

Replies from: HungryHobo, ChristianKl
comment by HungryHobo · 2013-05-22T12:55:28.765Z · LW(p) · GW(p)

How do they "fail"?

If autistic people get classed as non-humans then that's a failure on the part of the assessing human beings and merely forms part of the baseline to which you are comparing the machines.

The humans are your control so that you can't set silly standards for the machines. The humans can't fail any more than the control rats in a drug trial can fail.

comment by ChristianKl · 2013-05-23T23:16:43.135Z · LW(p) · GW(p)

If you think insightfulness is a good way to test AIs, then the person having the conversation with the AI can just say: "Hey, do you have an insight that you can share on X?"

Replies from: AspiringRationalist
comment by NoSignalNoNoise (AspiringRationalist) · 2013-05-24T03:16:39.966Z · LW(p) · GW(p)

Yes, but in the standard Turing test, the AI is then judged on how human it seems, not how insightful.