FAI FAQ draft: general intelligence and greater-than-human intelligence
post by lukeprog · 2011-11-23T19:52:32.175Z · 11 comments
My thanks to everyone who has provided feedback on these drafts so far. It's been helpful, and I've been incorporating your suggestions into the document. Now, I invite your feedback on these two snippets from the forthcoming Friendly AI FAQ. For references, see here.
_____
1.10. What is general intelligence?
There are many competing definitions and theories of intelligence (Davidson & Kemp 2011; Niu & Brass 2011; Legg & Hutter 2007), and the term has seen its share of emotionally-laden controversy (Halpern et al. 2011; Daley & Onwuegbuzie 2011).
Legg (2008) collects dozens of definitions of intelligence, and finds that they loosely converge on the following idea:
Intelligence measures an agent’s ability to achieve goals in a wide range of environments.
That will be our ‘working definition’ for intelligence in this FAQ.
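Legg & Hutter (2007) also propose a formal version of this idea, which they call "universal intelligence." As a rough sketch of their measure (the precise construction of environments and rewards is given in their paper), it can be written as:

\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

where \pi is the agent, E is a class of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V^{\pi}_{\mu} is the expected total reward the agent earns in \mu. Simple environments receive more weight, but an agent can score highly only by performing well across a wide range of them, which is the informal definition above made precise.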
There is a sense in which famous computers like Deep Blue and Watson are “intelligent.” They can outperform human competitors for a narrow range of goals (winning chess games or answering Jeopardy! questions), in a narrow range of environments. But drop them in a novel environment — a shallow pond or a New York taxicab — and they are dumb and helpless. In this sense their “intelligence” is not general.
Human intelligence is general in that it allows us to achieve goals in a wide range of environments. We can solve new problems of survival, competition, and fun in a wide range of environments, including ones never before encountered. That is, after all, how humans came to dominate all the land and air on Earth, and what empowers us to explore more extreme environments — like the deep sea or outer space — when we choose to. Humans have invented languages, developed agriculture, domesticated other animals, created crafts and arts and architecture, written philosophy, explored the planet, discovered math and science, evolved new political and economic systems, built machines, developed medicine, and made plans for the distant future.
Some other animals also have a slower but more general intelligence than Deep Blue and Watson. Apes, dolphins, elephants, and a few species of bird have demonstrated some ability to solve novel problems in novel environments (Zentall 2011).
General intelligence in a machine is called artificial general intelligence (AGI). Nobody has developed AGI yet, though many approaches are being attempted. Goertzel & Pennachin (2007) provides an overview of approaches to AGI.
1.11. What is greater-than-human intelligence?
Humans gained dominance over Earth not because we had superior strength, speed, or durability, but because we had superior intelligence. It is our intelligence that makes us powerful. It is our intelligence that allows us to adapt to new environments. It is our intelligence that allows us to subdue animals or invent machines that surpass us in strength, speed, durability and other qualities.
Humans do not operate at anywhere near the upper physical limit of general intelligence. Instead, humans are probably among the dumbest possible creatures capable of developing a technological civilization. After all, our intelligence is still running on a mess of evolved mammalian modules built of meat. Our neurons communicate much more slowly than electronic circuits. Our thinking is hobbled by comprehensive and deep-seated cognitive biases (Gilovich et al. 2002).
It is easy to create machines that surpass our cognitive abilities in narrow domains (chess, etc.), and easy to imagine the creation of machines that eventually surpass our cognitive abilities in a general way. A greater-than-human machine intelligence would exhibit over us the kind of superiority we exhibit over our ancestors in the genus Homo, or chimpanzees, or dogs, or even snails.
Some have argued that a machine cannot reach human-level general intelligence; see, for example, Lucas (1961), Dreyfus (1972), Penrose (1994), Searle (1980), and Block (1981). But Chalmers (2010) argues that these objections are irrelevant:
To reply to the Lucas, Penrose, and Dreyfus objections, we can note that nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following symbolic system, but he allows that it may nevertheless be a mechanical system that relies on subsymbolic processes (for example, connectionist processes). If so, then these arguments give us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate the human brain.
As for the Searle and Block objections, these rely on the thesis that even if a system duplicates our behaviour, it might be missing important ‘internal’ aspects of mentality: consciousness, understanding, intentionality, and so on... [But if] there are systems that produce apparently superintelligent outputs, then whether or not these systems are truly conscious or intelligent, they will have a transformative impact on the rest of the world.
Chalmers (2010) summarizes two arguments suggesting that machines can reach human-level general intelligence:
- The emulation argument (see section 7.3)
- The evolutionary argument (see section 7.4)
He also advances an argument for the conclusion that upon reaching human-level general intelligence, machines can be improved to reach greater-than-human intelligence: the extensibility argument (see section 7.5).
We can also get a sense of how human cognition might be surpassed by examining the limits of human cognition. These include:
- Small scale. The human brain contains 85-100 billion neurons (Azevedo et al. 2009; Williams & Herrup 1988), but a computer need not be so limited. Legg (2008) writes:
...a typical adult human brain weighs about 1.4 kg and consumes just 25 watts of power (Kandel et al. 2000). This is ideal for a mobile intelligence, however an artificial intelligence need not be mobile and thus could be orders of magnitude larger and more energy intensive. At present a large supercomputer can fill a room twice the size of a basketball court and consume 10 megawatts of power. With a few billion dollars much larger machines could be built.
With greater scale, a computer could far surpass human capacities for short-term memory, long-term memory, processing speed, and much more.
- Slow speed. Again, here is Legg (2008) (the ratios he cites are recomputed in the sketch after this list):
...brains use fairly large and slow components. Consider one of the simpler of these, axons... These are typically around 1 micrometre wide, carry spike signals at up to 75 metres per second at a frequency of at most a few hundred hertz (Kandel et al. 2000). Compare these characteristics with those of a wire that carries signals on a microchip. Currently these are 45 nanometres wide, propagate signals at 300 million metres per second and can easily operate at 4 billion hertz... Given that present day technology produces wires which are 20 times thinner, propagate signals 4 million times faster and operate at 20 million times the frequency, it is hard to believe that the performance of axons could not be improved by at least a few orders of magnitude.
- Poor algorithms. The brain’s algorithms for making calculations are often highly inefficient. A cheap calculator beats the most impressive savant in mental calculation.
- Proneness to distraction. Our brains are highly prone to distraction, loss of focus, and boredom. A machine intelligence need not suffer these deficiencies.
- Slow learning speed. Humans gain new skills and learn new material slowly, but a machine may be able to acquire new skills and knowledge at a rate more comparable to that of Neo in The Matrix (“I know kung-fu”).
- Limited communication abilities. Human tools for communication (the vibration of vocal cords, the movement of limbs, written words) are imprecise and noisy. Computers already communicate with each other much more quickly and accurately by using unambiguous languages (protocols) and direct electrical signaling.
- Limited self-reflection. Only in the past few decades have humans been able to look inside the “black box” that produces their feelings, judgments, and behavior — and even still, most of how our brains work is a mystery. Because of this, we must often infer (and sometimes be mistaken about) our own desires and judgments, and perhaps even our own subjective experiences. In contrast, a machine could be made to have access to its own source code, and thereby know exactly how it operates and reason directly about how to improve itself.
- Non-extensibility. Humans cannot easily integrate with hardware or with other human minds. Machines could quickly gain the benefits of being able to integrate with a variety of hardware and substrates.
- Limited sensory data. Humans have limited senses, and there are many more that could be had: ultraviolet vision (like bees have), infrared vision (like snakes), telescopic vision (like eagles), microscopic vision, infrasound hearing, ultrasound hearing, advanced chemical diagnosis (more sophisticated than the human tongue), super-smell, spectroscopy, and more.
- Cognitive biases. Due to the haphazard evolutionary construction of the human mind (Marcus 2008), humans are subject to a long list of cognitive biases that distort our thinking (Gilovich et al. 2002; Stanovich 2010). This need not be the case in machines.
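As an aside on the "slow speed" item above, here is a minimal Python sketch that recomputes the ratios Legg quotes from the raw figures in his passage. The only value I add is reading "a few hundred hertz" as roughly 200 Hz, which is an assumption on my part rather than a figure from Legg (2008).

```python
# Recompute the axon-vs-microchip-wire ratios quoted from Legg (2008).
# All figures except the axon firing rate are taken directly from the quote;
# "a few hundred hertz" is read here as ~200 Hz (an assumption).

axon_width_m   = 1e-6    # ~1 micrometre
axon_speed_mps = 75.0    # up to 75 metres per second
axon_freq_hz   = 200.0   # "a few hundred hertz" (assumed value)

wire_width_m   = 45e-9   # 45 nanometres
wire_speed_mps = 3e8     # 300 million metres per second
wire_freq_hz   = 4e9     # 4 billion hertz

print(f"wires are thinner by    ~{axon_width_m / wire_width_m:.0f}x")       # ~22, i.e. "20 times thinner"
print(f"signals are faster by   ~{wire_speed_mps / axon_speed_mps:.1e}x")   # 4.0e6, "4 million times faster"
print(f"frequency is higher by  ~{wire_freq_hz / axon_freq_hz:.1e}x")       # 2.0e7, "20 million times"
```

Run as-is, this reproduces the "roughly 20 times thinner, 4 million times faster, 20 million times the frequency" comparison in the quote.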
Thus, it seems that greater-than-human intelligence is possible for a long list of reasons.
11 comments
comment by lessdazed · 2011-11-24T05:21:16.918Z
Intelligence measures an agent’s ability to achieve goals in a wide range of environments...
Human intelligence is general in that it allows us to achieve goals in a wide range of environments. We can solve new problems of survival, competition, and fun in a wide range of environments
Too many uses of "wide range of environments."
Humans have invented languages...explored the planet...evolved new political and economic systems...
The origin of language is contentious, and the range of opinions includes some that have it occurring almost as naturally as a frog's hop. Better to leave it out.
Exploration is even less impressive. Rats are arguably more curious and have explored more places.
Political and economic systems, particularly non-failed ones, weren't planned and aren't even very well understood.
approaches are being attempted
"Tried" is much more common as a verb that steps away from the path metaphor. Other common verbs here, like "taken," fit it. "Attempted" just seems jarring to me.
Some other animals also have a slower but more general intelligence than Deep Blue and Watson.
Speed is only one very important difference between narrow AIs and weak NGIs; the quality of the best solution found is another.
"Some other animals also have more general intelligence than Deep Blue and Watson, though their solutions to problems are much further from optimal and they reach them more slowly than specialist narrow AIs."
Instead, humans are nearly the dumbest possible creature capable of developing a technological civilization.
"We aren't able to integrate animals somewhat less intelligent than us, such as our chimpanzee relatives, into technological civilization. Considering the enormous room there is for improvement on human intelligence, an interesting perspective is to think of ourselves as among the dumbest possible creatures capable of developing a technological civilization."
Or take that line out.
But our intelligence is still running on a mess of evolved mammalian modules built of meat.
"Our intelligence is still running on a mess of evolved mammalian modules built of meat, not evolved simply to maximize intelligence but to use few resources and solve problems found in the early evolutionary environment. Most (?) of the brain modules we use for general intelligence didn't originally evolve to specialize at it, and are instead optimized for other tasks."
But Chalmers (2010) points out that their arguments are irrelevant:
A bit strong. "Some contend that...But Chalmers (2010) argues that their objections are irrelevant:" That's logically weaker, but maybe more manipulative in a dark arts sense, if it's not legitimate to frame the thesis "AGI is possible" as one to be assumed unless a compelling objection is made.
communicate much slower
"more slowly"
and thereby know everything about its own operation and how to improve itself.
The problem here is that it isn't grammatically clear that "everything" does not also apply to "how to improve itself."
Limited sensory data
Add bats.
Mention somewhere: http://en.wikipedia.org/wiki/Moravec%27s_paradox
comment by Lapsed_Lurker · 2011-11-24T21:50:46.377Z
Couple of things in the quoted text chunks:
Compare these characteristics with those of a wire that carries signals on a microchip. Currently these are 45 nanometres wide, propagate signals at 300 million metres per second ...
Do signals on a microchip really travel at nearly the speed of light? I'd had the impression it was about half that, and some googling and looking at Wikipedia found that it varies a lot in big wires like network cable, but I didn't find anything definitive about signal speed on microchips.
...typical adult human brain weights about...
Should be 'weighs'
comment by spuckblase · 2011-11-24T15:03:25.291Z
Some have argued that a machine cannot reach human-level general intelligence, for example see Lucas (1961); Dreyfus (1972); Penrose (1994); Searle (1980); Block (1981). But Chalmers (2010) points out that their arguments are irrelevant:

To reply to the Lucas, Penrose, and Dreyfus objections, we can note that nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following symbolic system, but he allows that it may nevertheless be a mechanical system that relies on subsymbolic processes (for example, connectionist processes). If so, then these arguments give us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate the human brain.

As for the Searle and Block objections, these rely on the thesis that even if a system duplicates our behaviour, it might be missing important ‘internal’ aspects of mentality: consciousness, understanding, intentionality, and so on... [But if] there are systems that produce apparently superintelligent outputs, then whether or not these systems are truly conscious or intelligent, they will have a transformative impact on the rest of the world.

Chalmers (2010) summarizes two arguments suggesting that machines can reach human-level general intelligence:
- The emulation argument (see section 7.3)
- The evolutionary argument (see section 7.4)
This whole paragraph doesn't seem to belong to section 1.11.
comment by amcknight · 2011-11-23T22:48:54.575Z
humans are nearly the dumbest possible creature capable of developing a technological civilization
Dumbest possible? That sounds too strong. Humans are the first, which gives us good reason to think we are nowhere near the most intelligent, but we might not be at the very bottom. I think the very bottom would not be easy to evolve because it doesn't offer much of an advantage.
↑ comment by Normal_Anomaly · 2011-11-24T00:31:24.387Z
Chimpanzees have some level of intelligence insufficient for technological civilization. Our common ancestor with chimpanzees presumably had some level of intelligence insufficient for technological civilization. As our ancestors evolved gradually from that common ancestor, their intelligence increased gradually. As soon as it reached the level sufficient for technological civilization, they formed one, which has existed for the blink of an eye in evolutionary time. Current humans are just above that threshold, because we can only have been above it for a short time.
comment by Shmi (shminux) · 2011-11-23T22:36:24.571Z
I am not sold on your definition of intelligence:
Intelligence measures an agent’s ability to achieve goals in a wide range of environments.
That will be our ‘working definition’ for intelligence in this FAQ.
Does this mean that viruses and cockroaches are more intelligent than humans? They can certainly achieve their goals (feeding and multiplying) in a "wide range of environments", much wider than humans. Well, maybe not in space.
I suspect that there should be a better definition. Wikipedia mentions abstract thought and other intangibles, but concedes that there is little agreement: " Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions."
The standard cop out "I know intelligence when I see it" is not very helpful, either.
I understand the need to have a discussion of AGI in the FAI FAQ, but I am skeptical that a critically minded person would settle for the definition you have given. Something general, measurable and not confused with a bacterial infection would be a good target.
↑ comment by amcknight · 2011-11-23T22:45:10.887Z
Here's an easy fix:
Intelligence measures an agent's ability to achieve a wide range of goals in a wide range of environments.
↑ comment by Vladimir_Nesov · 2011-11-23T22:57:32.260Z
Intelligence measures an agent's ability to achieve a wide range of goals in a wide range of environments.
One flaw in this phrasing is that an agent exists in a single world, and pursues a single goal, so it's more about being able to solve unexpected subproblems.
↑ comment by RomeoStevens · 2011-11-24T01:57:39.373Z
Perhaps: given a poorly defined domain, construct a decision theory that is as close to optimal (given the goal of some future sensory inputs) as your sensory information about the domain allows.
This doesn't give one a rigorous way to quantify intelligence but does allow us to qualify it (ordinal scale) by making statements about how close or far away various decisions are from optimal. Otherwise I can't seem to fold decisions about how much time to spend trying to more rigorously define the domain into the general definition.