The Futility of Intelligence
post by XiXiDu · 2012-03-15T14:25:24.062Z · LW · GW · Legacy · 33 comments
The failures of phlogiston and vitalism are historical hindsight. Dare I step out on a limb, and name some current theory which I deem analogously flawed?
I name artificial intelligence or thinking machines - usually defined as the study of systems whose high-level behaviors arise from "thinking" or the interaction of many low-level elements. (R. J. Sternberg quoted in a paper by Shane Legg: “Viewed narrowly, there seem to be almost as many definitions of intelligence as there were experts asked to define it.”) Taken literally, that allows for infinitely many degrees of intelligence to fit every phenomenon in our universe above the level of individual quarks, which is part of the problem. Imagine pointing to a chess computer and saying "It's not a stone!" Does that feel like an explanation? No? Then neither should saying "It's a thinking machine!"
It's the noun "intelligence" that I protest, rather than the phrase "evoke a dynamic state sequence from a machine by computing an algorithm". There's nothing wrong with saying "X computes algorithm Y", where Y is some specific, detailed flowchart that represents an algorithm or process. "Thinking about" is another legitimate phrase that means exactly the same thing: the machine is thinking about a problem, according to a specific algorithm. The machine is thinking about how to put the elements of a list in a certain order, according to a specific algorithm called quicksort.
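To make the contrast concrete, here is what a "specific, detailed" algorithm looks like when written out (a minimal illustrative sketch in Python; the language choice is an illustration, not part of the original point):

```python
def quicksort(xs):
    """Return a sorted copy of xs by recursively partitioning around a pivot."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Saying "the machine computes quicksort" lets you predict its output on any input and count its steps; saying "the machine is intelligent" predicts nothing of the sort.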
Now suppose I were to say that a problem is explained by "thinking", or that the order of elements in a list is the result of a "thinking machine", and claim that as my explanation.
The phrase "evoke a dynamic state sequence from a machine by computing an algorithm" is acceptable, just like "thinking about" or "is caused by" are acceptable, if the phrase precedes some specification to be judged on its own merits.
However, this is not the way "intelligence" is commonly used. "Intelligence" is commonly used as an explanation in its own right.
I have lost track of how many times I have heard people say, "an artificial general intelligence would have a genuine intelligence advantage" as if that explained its advantage. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that its "advantage" is "intelligence"? You can make no new predictions. You do not know anything about the behavior of real-world artificial general intelligence that you did not know before. It feels like you believe a new fact, but you don't anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts - there's no detailed internal model to manipulate. Those who proffer the hypothesis of "intelligence" confess their ignorance of the internals, and take pride in it; they contrast the science of "artificial general intelligence" to other sciences merely mundane.
And even after the answer of "How? Intelligence!" is given, the practical realization is still a mystery and possesses the same sacred impenetrability it had at the start.
A fun exercise is to eliminate the explanation "intelligence" from any sentence in which it appears, and see if the sentence says anything different:
- Before: The AI is going to take over the world by using its superhuman intelligence to invent nanotechnology.
- After: The AI is going to take over the world by inventing nanotechnology.
- Before: A friendly AI is going to use its superhuman intelligence to extrapolate the coherent volition of humanity.
- After: A friendly AI is going to extrapolate the coherent volition of humanity.
- Even better: A friendly AI is a powerful algorithm. We can successfully extrapolate some aspects of the volition of individual humans using [FILL IN DETAILS] procedure, without any global societal variables, showing that we understand how to extrapolate the volition of humanity in theory and that it converges rather than diverges, that our wishes cohere rather than interfere.
Another fun exercise is to replace "intelligence" with "magic", the explanation that people had to use before the idea of an intelligence explosion was invented:
- Before: The AI is going to use its superior intelligence to quickly evolve vastly superhuman capabilities and reach singleton status within a matter of weeks.
- After: The AI is going to use magic to quickly evolve vastly superhuman capabilities and reach singleton status within a matter of weeks.
- Before: Superhuman intelligence is able to use the internet to gain physical manipulators and expand its computational capabilities.
- After: Superhuman magic is able to use the internet to gain physical manipulators and expand its computational capabilities.
Does not each statement convey exactly the same amount of knowledge about the phenomenon's behavior? Does not each hypothesis fit exactly the same set of outcomes?
"Intelligence" has become very popular, just as saying "magic" used to be very popular. "Intelligence" has the same deep appeal to human psychology, for the same reason. "Intelligence" is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship. Intelligence is popular because it is the junk food of curiosity. You can explain anything using intelligence , and so people do just that; for it feels so wonderful to explain things. Humans are still humans, even if they've taken a few science classes in college. Once they find a way to escape the shackles of settled science, they get up to the same shenanigans as their ancestors, dressed up in the literary genre of "science" but still the same species psychology.
33 comments
Comments sorted by top scores.
comment by Scott Alexander (Yvain) · 2012-03-15T18:10:44.293Z · LW(p) · GW(p)
Compare "intelligent" to "fast".
I say "The cheetah will win the race, because it is very fast."
This has some explanatory power. It can distinguish between various reasons a cheetah might win a race: maybe it had a head start, or its competitors weren't trying very hard, or it cheeted. Once we say "The cheetah won the race because it was fast" we know more than we did before.
The same is true of "General Lee won his battles because he was intelligent". It distinguishes the case in which he was a tactical genius from the case where he just had overwhelming numbers, or was very lucky, or had superior technology. So "intelligent" is totally meaningful here.
None of these are a lowest-level explanation. We can further explain a cheetah's fast-ness by talking about its limb musculature and its metabolism and so on, and we can further explain General Lee's intelligence by talking about the synaptic connections in his brain (probably). But we don't always need the lowest possible level of explanation; no one gets angry because we don't explain World War I by starting with "So, there were these electrons in Gavrilo Princip's brain that got acted upon by the electromagnetic force..."
A Mysterious Explanation isn't just any time you use a non-lowest-level explanatory word. It's when you explain something on one level by referring to something on the same level.
I have lost track of how many times I have heard people say, "an artificial general intelligence would have a genuine intelligence advantage" as if that explained its advantage.
Just as it is acceptable to say "General Lee won the battle because he was intelligent", so it is acceptable to say "The AI would conquer Rome because it was intelligent".
(just as it is acceptable to say "cavalry has an advantage over artillery because it is fast")
In fact, in the context of the quote, we were talking about the difference between a random modern human trying to take over Rome, and an AI trying to take over modern civilization. The modern human's advantage would be in technology and foreknowledge (as if General Lee won his battles by having plasma rifles and knowing all the North's moves in advance even though he wasn't that good a tactician); the AI might have those advantages, but also be more intelligent.
Replies from: WrongBot, JenniferRM, chaosmosis, XiXiDu
↑ comment by JenniferRM · 2012-03-16T17:06:25.297Z · LW(p) · GW(p)
I think XiXiDu's arguments are an over-reach but I think they contain an element of truth that is functionally important and that you're not giving him credit for. The place in your response where the insight is lurking seems to me like this bit:
The same is true of "General Lee won his battles because he was intelligent". It distinguishes the case in which he was a tactical genius from the case where he just had overwhelming numbers, or was very lucky, or had superior technology.
I understand what I'd expect to see if a victory came from overwhelming numbers or superior technology. Overwhelming numbers would be easy to quantify and plug into a very coarse grained victory-predicting battle simulation. Superior technology would be trickier, but could probably be accounted for in terms of higher rate of fire, faster supply lines, better communication, and more detailed simulation of these and similar technological elements.
However, I'd be hard pressed to quantify "tactical ability" or "luckiness" other than by pitting generals against each other over and over and seeing who won, kinda the way strength in the games of go and chess is assessed, except in the one case attributing the outcomes to brains and in the other to noise. Perhaps I could simulate everything really precisely, including their minds, and give the one with more "tactical ability" more clock cycles and RAM? But it would be tricky. Some real world algorithms actually do better with fewer clock cycles, because they overfit the training data if you give them enough rope to hang themselves with.
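As an aside on how strength is assessed in chess and go by pitting players against each other over and over: rating systems such as Elo turn repeated outcomes into a single strength number. A minimal sketch of the standard update rule, offered purely as an illustration (not from the original comment):

```python
def expected_score(r_a, r_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32):
    """New rating for A after one game; score_a is 1 for a win, 0.5 draw, 0 loss."""
    return r_a + k * (score_a - expected_score(r_a, r_b))

# A 1600-rated player who beats a 1500-rated player gains about 11-12 points.
print(round(elo_update(1600, 1500, 1.0), 1))
```

The rating machinery only aggregates outcomes; it is silent on whether the underlying cause is brains or noise, which is exactly the difficulty being pointed at.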
Fiction generally communicates more vividly when the elements of the story are shown to the reader rather than told to the reader. "His brow furrowed and his eyes flicked back and forth." vs "He wasn't good at hiding it when the surprise made him suspicious." Telling communicates a theory about what happened without bothering with the evidence; showing gives you the evidence. When writers of fiction try to show superintelligence, they run into a problem that can be solved in various ways, but to some degree it amounts to operationalizations of the hypothesis that a particular agent in the story is really smart. I've been collecting fiction and "non-fiction about the fiction" in this area for a while and one of them seems appropriate here.
Charlie Stross has written much about superintelligence, so he's spent a fair amount of time trying to figure out what you'd expect to observe, and one of his theories on this score has been floating around for a while:
On a related note, I once heard Stross talk about how to write superintelligences, and he gave an illustrative example, paraphrased: "When I need to take my cat to the vet, I bring out the cat carrier. The cat knows what this means, and then runs for the cat door. He is then very surprised to discover that it's closed and locked. To him, these two events are totally coincidental. I think that it's easy to write a story about a superintelligence: just have any humans that try to act against it constantly surprised by apparent coincidences that turn out to all have been the superintelligence's fault in the end."
Suppose I'm trying to thwart a superintelligence, as an exercise, and I end up tripping on a rock at the key moment. Afterwards, my expectation is that the superintelligence hasn't, for example, saturated my environment with rocks with hidden mechanisms that can shift on command, causing me to trip when it's observed to be required, because that would be wasteful and dumb. Instead I expect it to have placed a handful of such rocks in just the right places, and to be able to explain the basis for placing the rocks in those particular locations, and I expect its explanation to make sense in retrospect. It might take me three months to read and ponder and verify the explanation, but then at the end I should say "yeah, that was all just common sense and keen observation, iterated over and over".
What I expect, in other words, is that there exists predictable structure and pattern in the world that I'm just not noticing right now, but that I hypothesize someone else could notice if they were integrating more information, using better modeling, at faster clock cycles, in a more goal directed fashion. The hypothesis of "effective superintelligence" thus pre-supposes that there's this "low hanging idea fruit" out of our human reach for really banal reasons like having 7 plus or minus 2 working memory registers (rather than 15), or living 70 years rather than 140 (and thus having to curtail learning and go to seed equivalently earlier in our life cycle).
Another way to explain the issue in more vivid terms, closer to near mode... Imagine that some general wins four seemingly even battles in a row and gives a lot of post hoc reasoning about how and why he won. "Yes," he concludes while being quite tall and quite male and buffing his nails against his chest, "I guess I'm just a genius. Maybe you should make me president given how awesome I am." But one in sixteen equivalently competent generals would appear the same. And post hoc explanations are disturbingly easy to accept. Maybe the general just got lucky? (The luck hypothesis being Fermi's side in his famous conversation with Groves.) Or maybe he actually is particularly skilled but in a domain limited way? How would you tell whether it was luck, or not? How would you tell if it was domain agnostic, or not? Especially, how would you tell without having domain expertise yourself?
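The "one in sixteen" figure is just (1/2)^4 for four even battles. A quick simulation of how often pure luck produces such a streak (a sketch of the arithmetic, not part of the original comment):

```python
import random

def lucky_streak_rate(n_generals=16, n_battles=4, trials=100_000):
    """Fraction of trials in which at least one of n equally matched generals
    wins every one of their coin-flip battles."""
    hits = 0
    for _ in range(trials):
        if any(all(random.random() < 0.5 for _ in range(n_battles))
               for _ in range(n_generals)):
            hits += 1
    return hits / trials

# Analytically: 1 - (15/16)**16 is about 0.64, so in roughly two thirds of
# worlds some equally competent general ends up looking like a "genius".
print(lucky_streak_rate())
```

An undefeated record in a field of sixteen peers is close to what chance alone predicts, which is part of why post hoc explanations are so easy to accept.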
Imagine Omega created a duplicate earth exactly like ours (geography, ecology, etc) except humans were genetically adjusted so they had 15 working memory registers and lived to 140, and they attended school till they were 45 as a casual matter so they had to-us-encyclopedic knowledge of "the universe in general and how it works". Suppose one of them is transported here at age 50 (looking like a 25 year old). After they got over the squalor and amorality and the fact that we all resented them for being better than us, they might take over the world. Sure... that could be. Or instead they might "flirt with leftist causes and be briefly in the news after being arrested for involvement in a socialist rally that turned into a riot. After that, nothing."*.
Can you imagine a super intelligence whose long term result is "after that, nothing"? If value of information calculations exhibit diminishing marginal utility in the general case, this is precisely what I would expect. The early swift and meaningful rise happens when truly valuable information is being acquired faster than normal, but once the prodigy had the basic gist of the world in their head they would be pretty similar to everyone else with the same core knowledge. And yet if I saw someone getting better than normal outcomes in some domain that I didn't understand very well "superior intelligence" would be a tempting hypothesis. "Luck" or "intelligence"? How do you tell the difference from a distance? I don't have an answer but I think that generic falsification criteria to answer this question in advance in a domain agnostic way would be very valuable.
Replies from: Yvain, TheOtherDave
↑ comment by Scott Alexander (Yvain) · 2012-03-16T18:34:13.134Z · LW(p) · GW(p)
How would you tell whether it was luck, or not? How would you tell if it was domain agnostic, or not? Especially, how would you tell without having domain expertise yourself?
Given that all knowledge is probabilistic, it seems to me that I should believe there's a 93% chance he's skilled and a 7% chance he's lucky assuming equal prior probability. You could probably up your certainty a little by investigating whether other generals thought his tactics were brilliant or stupid, whether he has related skills like being good at chess and wargames, what his grades were at West Point, whether he conformed to military science's conventional wisdom as to the best tactics in a situation, and whether the narrative of the battles contain any obvious element of luck like "...and then a meteorite struck the enemy command center". Since luck is just probability, it shouldn't be too hard to calculate, and if that and intelligence are your only options, the opposite probability is intelligence.
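For concreteness, the arithmetic behind a figure in this ballpark, under the simplifying assumptions (added here for illustration) that a skilled general always wins, a merely lucky one wins each even battle with probability 1/2, and the prior is 50/50:

```latex
P(\text{skilled} \mid \text{4 wins})
  = \frac{1 \cdot \frac{1}{2}}{1 \cdot \frac{1}{2} + \left(\frac{1}{2}\right)^{4} \cdot \frac{1}{2}}
  = \frac{16}{17} \approx 0.94
```

Softer assumptions about how reliably a skilled general wins pull this down toward (and below) the ~93% quoted above, so the exact split depends on what "skilled" is taken to guarantee.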
That seems too simple an answer. Tell me what I'm missing.
Replies from: JenniferRM, ryjm
↑ comment by JenniferRM · 2012-03-17T02:10:35.636Z · LW(p) · GW(p)
Upvoted and hopefully answered :-)
Specifically, I think you might be missing the halo effect, the fundamental attribution error, survivorship bias, and strategic signalling to gain access to power, influence, and money.
What is the nature of the property that the general would have a 93% chance of having? Is it a property you'd hypothesize was shared by about 7% of all humans in history? Is it shared by 7% of extant generals? What if the internal details of the property you hypothesize is being revealed are such that no general actually has it, even though some general always wins each battle? How would you distinguish between these outcomes? How many real full scale battles are necessary and how expensive are they to run to push P(at least one general has the trait) and P(a specific general has the trait|at least one general has the trait) close to 1 or 0?
XiXiDu titled his article "The Futility Of Intelligence". What I'm proposing is something more like "The Use And Abuse Of Appearances Of General Intelligence, And What Remains Of The Theory Of General Intelligence After Subtracting Out This Noise". I think that there is something left, but I suspect it isn't as magically powerful or generic as is sometimes assumed, especially around these parts. You have discussed similar themes in the past in less mechanistic and more personal, friendly, humanized, and generally better written forms :-)
This point is consonant with ryjm's sibling comment but if my suspicions stand then the implications are not simply "subtle and not incredibly useful" but have concrete personal implications (it suggests studying important domains is more important than studying abstractions about how to study, unless abstraction+domain is faster to acquire than the domain itself, and abstraction+abstraction+domain faces similar constraints (which is again not a particularly original insight)). The same suspicion has application to political discourse and dynamics where it suggests that claims of generic capacity are frequently false, except when precise mechanisms are spelled out, as with market pricing as a reasonably robust method for coordinating complex behaviors to achieve outcomes no individual could achieve on their own.
A roughly analogous issue comes up in the selection of "actively managed" investment funds. All of them charge something for their cognitive labor and some of them actually add value thereby, but a lot of it is just survivorship bias and investor gullibility. "Past performance is no guarantee of future results." Companies in that industry will regularly create new investment funds, run them for a while, and put the "funds that have survived with the best results so far" on their investment brochures while keeping their other investment funds in the background where stinkers can be quietly culled. It's a good trick for extracting rent from marks, but it's not the sort of thing that would be done if there was solid and simple evidence of a "real" alpha that investors could pay attention to as a useful and generic predictor of future success without knowing much about the context.
I have a strong suspicion, and I'd love this hunch to be proved wrong, that there's mostly no free lunches when it comes to epistemology. Being smart about one investment regime is not the same as being smart about another investment regime. Being a general and playing chess have relatively little cross-applicable knowledge. Being good at chess has relatively little in common with being good at the abstractions of game theory.
With this claim (which I'm not entirely sure of because it's very abstract and hard to ground in observables) I'm not saying that AGI that implements something like "general learning ability in silicon and steel" wouldn't be amazing or socially transformative, and I'm not saying that extreme rationality is worthless; it's more like I'm claiming that it's not magic, with a sub-claim that sometimes some people seem to speak (and act?) as though they think it might be magic. Like they can hand-wave the details because they've posited "being smarter" as an ontologically basic property rather than as a summary for having nailed down many details in a functional whole. If you adopt an implementation perspective, then the summary evaporates because the details are what remain before you to manipulate.
Replies from: Yvain
↑ comment by Scott Alexander (Yvain) · 2012-03-17T11:57:41.577Z · LW(p) · GW(p)
So I'm interpreting your point as being "What if what we think of when we say 'general intelligence' isn't really all that useful in different domains, but we keep treating it as if it were the kind of thing that could constantly win battles or conquer Rome or whatever?" Perhaps then it was a mistake to talk about generals in battle, as your theory is that there may be an especially victorious general, but his fortune may be due more to some specific skill at tactics than his general intelligence?
I guess my belief in the utility of general intelligence (you cited an article of mine arguing against huge gains from technical rationality, which I consider very different; here I'm talking about pure IQ) would come from a comparison with subnormal intelligence. A dog would make a terrible general. To decreasing degrees, so too would a chimp, a five year old child, a person with Down's Syndrome, and most likely a healthy person with an IQ of 75. These animals and people would also, more likely than not, be terrible chess players, mathematicians, writers, politicians, and chefs.
This is true regardless of domain-specific training: you can read von Clausewitz's On War to a dog and it will just sit there, wagging its tail. You can read it to a person with IQ 75, and most of the more complicated concepts will be lost. Maybe reading On War would allow a person with a few dozen IQ point handicap to win, but it's not going to make a difference across a gulf the size of the one between dogs and humans.
Humans certainly didn't evolve a separate chess playing module, or a separate submarine tactics module, so we attribute our being able to wipe the floor with dogs and apes in chess or submarine warfare to some kind of "high general intelligence" we have and they don't.
So to me, belief in a general intelligence that could give AIs an advantage is just the antiprediction that the things that kept being true up until about IQ 100 still continue to be true after that bar. Just as we expect a human to be able to beat a dog at chess (even if we could get the dog to move pieces with its nose or something), and we would use the word "intelligence" to explain why, so I would expect Omega to be able to beat a human for the same reason.
Is that a little closer to the point of your objection?
Replies from: JenniferRM
↑ comment by JenniferRM · 2012-03-18T04:10:01.681Z · LW(p) · GW(p)
First, I'd like to make sure that you understand I'm trying to explicate a hypothesis that seems to me like it could be true or false, but that seems to be considered "almost certainly false" in this community. I'm arguing for wider error bars on this subject, not a reversal of position, and also suggesting that a different set of conceptual tools (more focused on the world and less focused on "generic cognitive efficacy") are relevant.
Second: yes that is somewhat closer to the point of my objection and it also includes a wonderfully specific prediction which I suspect is false.
So to me, belief in a general intelligence that could give AIs an advantage is just the antiprediction that the things that kept being true up until about IQ 100 still continue to be true after that bar.
My current leading hypothesis here is that this is false in two ways, although one of those ways might be a contingent fact about the nature of the world at the present time.
Keep in mind that the studies that show IQ to be correlated with adaptive life outcomes (like income and longevity and so on) are mostly based on the middle of the curve. It appears to just be more helpful for people to have an IQ of 110 instead of 90 and there are lots of such people to run the stats to determine this. The upper edge is harder to study for lack of data but that's what we're trying to make inferences about. I suspect that either of us could be shown to be in error here by a good solid empirical investigation in the future.
Given that limitation, my current median expectation, based primarily on summaries of a reanalysis of the Terman Study, is that above about 135 for men (and 125 for women), high IQ tends to contingently lead to social dysfunction due to loneliness and greater potential for the development of misanthropy. Basically it seems to produce difficulties "playing well with others" rather than superior performance from within an integrated social network, simply because there are so many less intelligent people functioning as an isolating buffer, incapable of understanding things that seem obvious to the high IQ person. This is a contingent problem in the sense that if dumb people were all "upgraded" to equivalent levels of functioning then a lot of the problem would go away and you might then see people with an IQ of 160 not having these problems.
(For the record, so far as I can tell I'm not one of the super-brains... I just have sympathy for them, because the people I've met who are in this range seem to have hard lives. One of the things that makes their lives hard is that most people can't tell them apart from people like me who are dancing on the edge of this zone.)
The second reason high IQ may not be very useful is much deeper and follows on issues similar to the concept of the value of information. Simply put, "IQ" can be glossed as "the speed with which useful mindware and information can be acquired and deployed", and there may be diminishing returns in mindware just as there are diminishing returns in simpler information. Quoting Grady Towers quoting Hollingworth:
A second adjustment problem faced by all gifted persons is due to their uncommon versatility. Hollingworth says:
Another problem of development with reference to occupation grows out of the versatility of these children. So far from being one-sided in ability and interest, they are typically capable of so many different kinds of success that they may have difficulty in confining themselves to a reasonable number of enterprises. Some of them are lost to usefulness through spreading their available time and energy over such a wide array of projects that nothing can be finished or done perfectly. After all, time and space are limited for the gifted as for others, and the life-span is probably not much longer for them than for others. A choice must be made among the numerous possibilities, since modern life calls for specialization [3, p. 259].
In your comment you wrote:
Just as we expect a human to be able to beat a dog at chess (even if we could get the dog to move pieces with its nose or something), and we would use the word "intelligence" to explain why, so I would expect Omega to be able to beat a human for the same reason.
Chess is a beautiful example, because it is a full information deterministic zero sum game, which means there "exists" (i.e. there mathematically exists) a way for both sides to play perfectly. The final state of the game that results from perfect play is just a mathematical fact about which we are currently ignorant: it will either be a win for white, a win for black, or a tie. Checkers has been weakly solved and, with perfect play, it is a tie. If it's ever fully solved then a person with an internet connection, some google-fu, and trivial system admin and software usage skills would be able to tie Omega. It's not a fact about my brain that I would be able to tie Omega that way, it's a fact about checkers. That's just how checkers is. Perhaps they could even use Anki and some structured practice to internalize the checkers solution so that they could just tie Omega directly.
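To make "perfect play is a fact about the game" concrete, here is an illustrative sketch (not from the comment) for a toy full-information, deterministic, zero-sum game; solving checkers is the same computation, just astronomically larger:

```python
from functools import lru_cache

# Toy subtraction game: a pile of n stones, each player removes 1, 2, or 3,
# and whoever takes the last stone wins.
@lru_cache(maxsize=None)
def first_player_wins(n):
    """True iff the player to move wins an n-stone pile under perfect play."""
    return any(not first_player_wins(n - k) for k in (1, 2, 3) if k <= n)

# Under perfect play, multiples of 4 are losses for the player to move:
print([n for n in range(1, 13) if not first_player_wins(n)])  # [4, 8, 12]
```

Once such a table exists for a game, playing optimally is a matter of lookup; the outcome was a fact about the game all along, not about any particular player's brain.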
So what if a given occupation, or more broadly "dealing with reality in general" is similar to chess in this respect? What if reality admits of something like "perfect play" and perfect play turns out to not be all that complicated? A bit of tit-for-tat, some operations research, a 3D physics simulator for manual dexterity, and so on with various skills, but a finite list of basically prosaic knowledge and mindware. It is almost certain that a teachable version of such a strategy has not been developed and delivered to kids in modern public schools, and even a pedagogically optimized version of it might not fit in our heads without some way of augmenting our brains to a greater or lesser extent.
The fact that a bright person can master a profession swiftly enough to get bored and switch to some other profession may indicate that humans were not incredibly far from this state already.
I'm not saying there's nothing to IQ/intelligence/whatever. I'm just saying that it may be the case that the really interesting thing is "what optimal play looks like" and then you only need enough mindware loading and deploying ability to learn it and apply it. If this is the case, and everyone is obsessing over "learning and deployment speed", and we're not actually talking much about what optimal strategy looks like even though we don't have it nailed down yet, then that seems to me like it would be an important thing to be aware of. Like maybe really important.
And practically speaking, the answer seems like it might not be found by studying brains or algorithms. My tendency (and I might be off track here) is to look for the answer somewhere in the shape of the world itself. Does it admit of optimal play or not? Can we put bounds on a given strategy we actually have at hand to say that this strategy is X far away from optimal?
And more generally but more personally, my biggest fear for the singularity is that "world bots" (analogous to "chess bots") won't actually be that hard to develop, and they'll win against humans because we don't execute very well and we keep dying and having to re-learn the boring basics over and over every generation, and that will be that. No glorious mind children. No flowering of art and soulfulness as humans are eventually out competed by things of vastly greater spiritual and mental depth. Just unreflective algorithms grinding out a sort of "optimal buildout strategy" in a silent and mindless universe. Forever.
That's my current default vision for the singularity and its why I'm still hanging out on this website. If we can get something humanly better than that, even if it slows down the buildout, then that would be good. So far, this website seems like the place where I'd meet people who want to do that.
If someone knows of a better place for such work please PM me. I see XiXiDu as paying attention to the larger game as well... and getting down voted for it... and I find this a little bit distressing... and so I'm writing about it here in the hopes of either learning (or teaching) something useful :-)
Replies from: gwern
↑ comment by gwern · 2012-06-25T22:10:45.535Z · LW(p) · GW(p)
Given that limitation, my current median expectation, based primarily on summaries of a reanalysis of the Terman Study, is that above about 135 for men (and 125 for women), high IQ tends to contingently lead to social dysfunction due to loneliness and greater potential for the development of misanthropy. Basically it seems to produce difficulties "playing well with others" rather than superior performance from within an integrated social network, simply because there are so many less intelligent people functioning as an isolating buffer, incapable of understanding things that seem obvious to the high IQ person. This is a contingent problem in the sense that if dumb people were all "upgraded" to equivalent levels of functioning then a lot of the problem would go away and you might then see people with an IQ of 160 not having these problems.
Which re-analysis was that? The material I am aware of shows that income continues to increase with IQ as high as the scale goes, which certainly doesn't sound like dysfunction; e.g. "'The Effects of Education, Personality, and IQ on Earnings of High-Ability Men', Gensowski et al 2011" (similar to SMPY results). And from "Rethinking Giftedness and Gifted Education: A Proposed Direction Forward Based on Psychological Science", which is very germane to this discussion:
Some subscribe to the ability-threshold/creativity hypothesis, which postulates that the likelihood of producing something creative increases with intelligence up to about an IQ of 120, beyond which further increments in IQ do not significantly augment one’s chances for creative accomplishment (Dai, 2010; Lubart, 2003). There are several research findings that refute the ability-threshold/creativity hypothesis. In a series of studies, Lubinski and colleagues (Park et al., 2007, 2008; Robertson et al., 2010; Wai et al., 2005) showed that creative accomplishments in academic (degrees obtained), vocational (careers), and scientific (patents) arenas are predicted by differences in ability. These researchers argue that previous studies have not found a relationship between cognitive ability and creative accomplishments for several reasons. First, measures of ability and outcome criteria did not have high enough ceilings to capture variation in the upper tail of the distribution; and second, the time frame was not long enough to detect indices of more matured talent, such as the acquisition of a patent (Park et al., 2007).
- Dai, D. Y. (2010). The nature and nurture of giftedness: A new framework for understanding gifted education. New York, NY: Teachers College Press.
- Lubart, T. I. (2003). In search of creative intelligence. In R. J. Sternberg, J. Lautrey, & T. I. Lubart (Eds.), Models of intelligence: International perspectives (pp. 279–292). Washington, DC: American Psychological Association.
- Park, G., Lubinski, D., & Benbow, C. P. (2007). Contrasting intellectual patterns predict creativity in the arts and sciences: Tracking intellectually precocious youth over 25 years. Psychological Science, 18, 948–952. doi:10.1111/j.1467-9280.2007.02007.x
- Park, G., Lubinski, D., & Benbow, C. P. (2008). Ability differences among people who have commensurate degrees matter for scientific creativity. Psychological Science, 19, 957–961. doi:10.1111/j.1467-9280.2008.02182.x
- Robertson, K. F., Smeets, S., Lubinski, D., & Benbow, C. P. (2010). Beyond the threshold hypothesis: Even among the gifted and top math/science graduate students, cognitive abilities, vocational interests, and lifestyle preferences matter for career choice, performance, and persistence. Current Directions in Psychological Science, 19, 346–351. doi:10.1177/0963721410391442
- Wai, J., Lubinski, D., & Benbow, C. P. (2005). Creativity and occupational accomplishments among intellectually precocious youths: An age 13 to age 33 longitudinal study. Journal of Educational Psychology, 97, 484–492. doi:10.1037/0022-0663.97.3.484
Replies from: JenniferRM
↑ comment by JenniferRM · 2012-06-26T17:20:30.151Z · LW(p) · GW(p)
The re-analysis was by Grady Towers, with quoting and semi-philosophic speculation, as linked before. I suggested that increasing IQ might not be very useful, with the first human issue being a social contingency that your citations don't really seem to address, because patents and money don't necessarily make people happy or socially integrated.
The links are cool and I appreciate them and they do push against the second (deeper) issue about possible diminishing marginal utility in mindware for optimizing within the actual world, but the point I was directly responding to was a mindset that produced almost-certainly-false predictions about chess outcomes. The reason I even brought up the social contingencies and human mindware angles is because I didn't want to "win an argument" on the chess point and have it be a cheap shot that doesn't mean anything in practice. I was trying to show directions in which it would be reasonable to propagate the update if someone was really surprised by the chess result.
I didn't say humans are at the optimum, just that we're close enough to the optimum that we can give Omega a run for its money in toy domains, and we may be somewhat close to Omega in real world domains. Give it 30 to 300 years? Very smart people being better than smart people at patentable invention right now is roughly consistent with my broader claim. What I'm talking about is that very smart people aren't as dominating over merely smart people as you might expect if you model human intelligence as a generic-halo-of-winning-ness, rather than modeling human intelligence as a slightly larger and more flexible working memory and "cerebral" personal interests that lead to the steady accumulation of more and "better" culture.
↑ comment by ryjm · 2012-03-16T21:24:37.318Z · LW(p) · GW(p)
I think the subtlety here is that intelligence is used in place of domain specific aptitude, when much more information can be obtained through an enumeration of the latter. Given the sixteen equally competent generals with one who successfully wins four even battles in a row, saying "he won because he was intelligent" gives us limited information in that it does reveal the reason why he won to the majority of people who don't care for specifics, but does not tell us what this specific general did differently when compared to his equally skilled counterparts.
So using "intelligence" as an answer is not as mysterious as using "magic" or "complexity" in a general context, but in domain specific areas it relays little value - in such a situation, I would think that all participants would ask for some sort of clarification (specific tactics, key responses, etc). It most likely is intelligence that gave him the win though, even if we aren't about to go into specifics; but perhaps we are missing vital knowledge by saying "intelligence!" and ending.
However, I think this is a subtle and not incredibly useful point when applied in general.
↑ comment by TheOtherDave · 2012-03-16T17:16:35.282Z · LW(p) · GW(p)
"Luck" or "intelligence"? How do you tell the difference from a distance?
Recording predictions and comparing them to actual events helps.
It's not guaranteed -- it's still possible that my predictions came true by sheer coincidence -- but the amount of luck required to fulfill detailed predictions is significantly higher than the amount of luck required to support compelling post-hoc explanations.
Of course, this presumes that the putatively intelligent system is attempting to demonstrate its intelligence. If they want me to think it's luck, that's a much harder problem.
↑ comment by chaosmosis · 2012-05-01T03:16:10.099Z · LW(p) · GW(p)
+Karma. The "cheeted" pun would have earned it an upvote even if this wasn't as useful as it is.
↑ comment by XiXiDu · 2012-03-17T12:51:53.339Z · LW(p) · GW(p)
Yvain, the whole point of the above post was to show that you can't estimate the probability and magnitude of the advantage an AI will have if you are using something that is as vague as 'intelligence'.
Just as it is acceptable to say "General Lee won the battle because he was intelligent", so it is acceptable to say "The AI would conquer Rome because it was intelligent".
No, it is not. Because it is seldom intelligence that makes people win. Very high IQ is not correlated with might and power. Neither does evolution favor intelligence.
Replies from: Yvain, ArisKatsaris
↑ comment by Scott Alexander (Yvain) · 2012-03-17T13:15:35.251Z · LW(p) · GW(p)
Neither does evolution favor intelligence.
Evolution doesn't universally favor any one trait (see the Sequences). It favors whatever is most useful in a particular situation. That having been said, intelligence has proven pretty useful so far; humans seem more evolutionarily successful than chimps, and I'd drag in intelligence rather than bipedalism or hairlessness or whatever to explain that.
More importantly, when you say high IQ isn't correlated with might and power, I think you're thinking of minuscule, silly differences like the difference between the village idiot and Albert Einstein (see the Sequences). Let's think more "human vs. dog". In a battle between two armies, the one led by a human and the other by a dog, the human will win every time. Given enough time to plan and enough access to the products of other humans, a human can win any conceivable contest against a dog, even if that dog has equal time to plan and equal access to the products of other dogs. Very high IQ is totally correlated with might and power when "very high IQ" means "IQ of at least five digits". (see the Sequences)
Replies from: XiXiDu
↑ comment by XiXiDu · 2012-03-17T13:51:09.137Z · LW(p) · GW(p)
Let's think more "human vs. dog".
But I do not accept that argument. If humans are Turing complete then AI might be faster and less biased but not anything that would resemble the "human vs. dog" comparison.
Very high IQ is totally correlated with might and power when "very high IQ" means "IQ of at least five digits". (see the Sequences )
The Sequences conclusively show that there could exist anything like a five-digit IQ?
And even then. I think you are vastly underestimating how fragile an AI is going to be and how much the outcome of a battle is due to raw power and luck. I further think that you overestimate what even a speculative five digit IQ being could do without a lot of help and luck.
Replies from: DanArmak
↑ comment by DanArmak · 2012-03-17T14:46:08.210Z · LW(p) · GW(p)
If humans are Turing complete then AI might be faster and less biased but not anything that would resemble the "human vs. dog" comparison.
Humans definitely are Turing complete: we can simulate Turing machines precisely in our heads, with pen-and-paper, and with computers. (Hence people can dispute whether the human Alan Turing specified TMs to be usable by humans or whether TMs have some universally meaningful status due to laws of physics or mathematics.)
So in a sense the AI can "only" be faster. This is still very powerful if it's, say, 10^9 times as fast as a human. Game-changingly powerful. A single AI could think, serially, all the thoughts and new ideas it would take all of humanity to think in parallel.
But an AI can also run much better algorithms. It doesn't matter that we're Turing-complete or how fast we are, if the UTM algorithm we humans are actually executing is hard-wired to revolve around social competition and relationships with other humans! In a contest of e.g. scientific thought, it's pretty clear that there exist algorithms that are much better qualitatively than the output of human research communities.
That's without getting into recursive self-improvement territory. An AI would be much better than humans simply by the power of being immune to boredom, sleep, akrasia, known biases, ability to instantaneously self-modify to eliminate point bugs (and self-debug in the first place), unlimited working memory and storage memory size (as compared to humans), direct neural (in human terms) access to Internet and all existing relevant databases of knowledge, probably an ability to write dedicated (conventional) software that's as fast and efficient as our sensory modalities (humans are pretty bad at general-purpose programming because we use general-purpose consciousness to do it), ability to fully update behavior on new knowledge, ability to directly integrate new knowledge and other AIs' output into self, etc. etc.
You say an AI might be "less biased" than humans off-handedly, but that too is a Big Difference. Imagine all humans at some point in history are magically rid of all biases known to us today, and gain an understanding and acceptance of everything we know today about rationality and thought. How long would it take those humans to overtake us technologically? I'd guess no more than a few centuries, no matter where you started (after the shift to agriculture).
To sum up, the difference between humans and a sufficiently good AI wouldn't be the same as that between humans and a dog, or even of the same type. It's a misleading comparison and maybe that's one reason why you reject it. It would, however, lead to definite outright AI victory in many contests, due to the AI's behavior (rather than its external resources etc). And that generalization is what we name "greater intelligence".
Replies from: wedrifid
↑ comment by wedrifid · 2012-03-18T01:56:59.084Z · LW(p) · GW(p)
So in a sense the AI can "only" be faster.
And more reliable. Humans can't simulate a Turing Machine beyond a certain level of complexity without making mistakes. We will eventually misplace a rock.
↑ comment by ArisKatsaris · 2012-03-17T13:34:20.435Z · LW(p) · GW(p)
Very high IQ is not correlated with might and power
Am reasonably sure it is correlated. For example, I'd wager that the average IQ of the world's 100 most powerful people (no matter how you choose to judge that: fame, political power, financial power, military power, whatever) is significantly higher than the average IQ of the world's 100 weakest people (same criteria as above).
comment by DanielLC · 2012-03-15T16:42:48.824Z · LW(p) · GW(p)
I have lost track of how many times I have heard people say, "an artificial general intelligence would have a genuine intelligence advantage" as if that explained its advantage.
It explains the nature of its advantage. The advantage is not a superior end effector, but superior ability in figuring out what to do with the end effector.
That does not tell you how it works. We don't know how it works. If we did, we'd have already built AI.
comment by endoself · 2012-03-15T20:01:39.393Z · LW(p) · GW(p)
R. J. Sternberg quoted in a paper by Shane Legg: “Viewed narrowly, there seem to be almost as many definitions of intelligence as there were experts asked to define it.”
Did you read the conclusion of that paper, where Legg and Hutter mentioned another of their papers that proposes a formal definition of intelligence? Their definition isn't perfect and it is easy to sneak in connotations, but that is far better than a definition that makes no predictions.
comment by Viliam_Bur · 2012-03-15T14:43:07.136Z · LW(p) · GW(p)
Seems to me that part of our concept of "intelligence" is the ability to optimize in many different domains, and another part is anthropomorphism -- because so far humans are the only example known to be able to optimize in many different domains. Now how do we separate these two parts? Which parts of "what humans do" can be removed while preserving the ability to optimize?
If a machine is able to optimize in many different domains, and if this includes human language and psychology, then the machine should be able to understand what humans ask, and then give them the answers that increase their utility (even correcting for possible human misunderstanding and biases). Seems to me that most people, after talking with such a machine, would agree that it is intelligent -- by definition it should give them satisfying (not necessarily correct) answers, including answers to questions like "Why?".
So I think if something is a good cross-domain optimizer, and is able to communicate with humans, humans will consider it intelligent. The opposite direction is a problem: people may assume something is a necessary part of intelligence (and must be solved when building an AI), even though it is not necessary. In other words, in black-box testing people will report the AI as intelligent, but some of their ideas about intelligence may be superfluous when constructing such an AI.
EDIT: Less seriously, your analogy with "magic" works here too -- if people get very satisfying answers to problems they were not able to solve, many of them will consider the machine magical too; and yet they will have bad ideas about its construction.
comment by dvasya · 2012-03-16T22:05:28.543Z · LW(p) · GW(p)
Before: The AI is going to take over the world by using its superhuman intelligence to invent nanotechnology.
After: The AI is going to take over the world by inventing nanotechnology.
This trick works only because you already know how exactly the AI is going to take over the world, so it doesn't even need to switch its intelligence on. If, however, you change this for
Before: The AI is going to take over the world by using its superhuman intelligence to do something totally unforeseeable by us meatbags.
After: The AI is going to take over the world by doing something totally unforeseeable by us meatbags.
then there seems to be important distinction. There are lots of things that we can't foresee, but only some of those would lead the AI to take over the world, and finding this subset would require something of the AI, and we need a name for that something.
Back to the original point, I am quite fond of a metaphor by Marvin Minsky, where he compares intelligence to the concept of "the unexplored regions of Africa," which makes perfect sense, even though they disappear as soon as we discover them.
comment by thelittledoctor · 2012-03-15T15:36:45.031Z · LW(p) · GW(p)
Anthropomorphizing computer programs is a time-honoured tradition, which may explain some of the usage in a less worrisome way.
Typically I think of intelligence as a generalized ability to achieve goals (much as in Viliam_Bur's comment).
Replies from: lukstafi
↑ comment by lukstafi · 2012-03-15T17:37:36.486Z · LW(p) · GW(p)
While I was trying to find a quote, roughly saying "I know my students begin to understand object-oriented programming when they start to anthropomorphize their objects", I could only find Dijkstra's bashing of OO and anthropomorphization: http://lambda-the-ultimate.org/node/264.
Replies from: lukstafi
↑ comment by lukstafi · 2013-05-17T11:37:19.573Z · LW(p) · GW(p)
"As a teacher of object-oriented programming, I know that I have succeeded when students anthropomorphise their objects, that is, when they turn to their partners and speak of one object asking another object to do something. I have found that this happens more often, and more quickly, when I teach with Smalltalk than when I teach with Java: Smalltalk programmers tend to talk about objects, while Java programmers tend to talk about classes." Object-oriented programming: some history, and challenges for the next fifty years by Andrew P. Black
comment by Richard_Kennaway · 2012-03-15T15:04:21.940Z · LW(p) · GW(p)
"Intelligence" is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship.
The word "intelligence", properly used, is not an explanation. It is a name for a phenomenon that at present we have no explanation for. And because we have no explanation for it, we can define it only by describing what it looks like. None of the various descriptions that people have come up with are explanations, and we cannot even have confidence that any of them draws a line that corresponds to a real division in the world.
As an analogy, how could someone answer the question "what is iron ore?" in pre-modern times? Only by saying that it's a sort of rock looking thus-and-so from which iron can be obtained by a certain process. Saying "this is iron ore" is not an explanation of the fact that you can get iron from it, it is simply a statement of that fact.
Saying, "this hypothetical creature is an artificial superintelligence" is merely to say that one has imagined something that does intelligence supremely better than we do. To say that it would therefore have an advantage in taking over ancient Rome is to say that the skill of intelligence would be useful for that purpose, and the more the better, whereas the ability to lift heavy things, leap tall buildings, or see in the dark would be of at most minor relevance to the task.
Replies from: Incorrect
↑ comment by Incorrect · 2012-03-15T15:09:37.613Z · LW(p) · GW(p)
It is a name for a phenomenon that at present we have no explanation for.
Approximation of Solomonoff induction?
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2012-03-15T15:17:16.845Z · LW(p) · GW(p)
That's just another description of the results that intelligence obtains. By contrast, this explains why you can get iron from the rocks you can get it from.
Replies from: Incorrect
↑ comment by Incorrect · 2012-03-15T15:44:18.769Z · LW(p) · GW(p)
Well, you can design explicit approximations of Solomonoff induction like AIXI, they're just intractable.
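For readers who haven't seen it written out, the underlying idea can be stated compactly (this is the standard formulation from the literature, not anything specific to this thread): the Solomonoff prior of a binary string x is the total weight of all programs that make a universal prefix machine U output something beginning with x,

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}
```

where the exponent is the length of program p in bits and x∗ means any output that begins with x. AIXI combines a mixture of this kind with expected-reward maximization. The sum ranges over all programs, which is why the ideal is uncomputable and why, as noted above, its explicit approximations are intractable.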
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2012-03-15T16:37:55.093Z · LW(p) · GW(p)
they're just intractable
I am not persuaded that AIXI is a step towards AGI. When I look at the field of AGI it is as if I am looking at complexity theory at a stage when the concept of NP-completeness had not been worked out.
Imagine an alternate history of complexity theory, in which at some stage we knew of a handful of problems that seemed really hard to solve efficiently, but an efficient solution to one would solve them all. If someone then discovered a new problem that turned out to be equivalent to these known ones, it might be greeted as offering a new approach to finding an efficient solution -- solve this problem, and all of those others will be solved.
But we know that wouldn't have worked. When a new problem is proved NP-complete, that doesn't give us a new way to find efficient solutions to NP-complete problems. It just gives us a new example of a hard problem.
Look at all the approaches to AGI that have been proposed. Logic was mechanised, and people said, "Now we can make an intelligent machine." That didn't pan out. "Good enough heuristics will be intelligent!" "A huge pile of 'common sense' knowledge and a logic engine will be intelligent!" "Really good compression would be equivalent to AGI!" "Solomonoff induction is equivalent to AGI!"
So nowadays, when someone says "solve this new problem and it will be an AGI!" I take that to be a proof that the new problem is just as hard as the old ones, and that no new understanding has been gained about how to make an AGI.
The analogy with complexity theory breaks down in one important way: there are reasons to think that P != NP is not merely a mathematical, but a physical law (I don't have an exact reference, but Scott Aaronson has said this somewhere), but we already have an existence proof for human-level intelligence: us. So we know there is a solution that we just haven't found.
comment by TheOtherDave · 2012-03-15T22:54:11.368Z · LW(p) · GW(p)
Yeah; I've become fonder of using "powerful optimizer" in place of "superintelligence" for reasons of specificity... especially since I can imagine cases of the former that I wouldn't classify as the latter, to which much of what is said about the latter would still apply.