The Impossibility of the Intelligence Explosion

post by DragonGod · 2017-11-30T05:47:43.879Z · LW · GW · 10 comments

I don't agree with everything here—or even the central argument—but this post was informative for me, and I think others would benefit from it.

Things I learned.

  1. No free lunch theorem. I'm very grateful for this, and it made me start learning more about optimisation.
  2. From the above: there is no general intelligence. I had previously believed that finding a God's algorithm for optimisation would be one of the most significant achievements of the century. I discovered that's impossible.
  3. Exponential growth does not imply exponential progress, as exponential growth may meet exponential bottlenecks. This was also something I didn't appreciate. Upgrading from a level n intelligence to a level n+1 intelligence may require more relative intelligence than upgrading from a level n-1 to a level n. Exponential bottlenecks may result in diminishing marginal growth of intelligence (see the toy sketch just below).
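A toy numeric sketch of point 3 (my own model, not from Chollet's essay): if the optimisation effort available grows exponentially but the effort required per unit of capability also grows exponentially, capability only grows linearly.

```python
# Toy model (my own assumption, not from the essay): the effort available for
# self-improvement grows exponentially, but the effort required per unit of
# capability gained also grows exponentially ("exponential bottlenecks").

def capability_after(steps, effort_growth=2.0, difficulty_growth=2.0):
    capability, effort, difficulty = 0.0, 1.0, 1.0
    for _ in range(steps):
        capability += effort / difficulty   # gain achieved this step
        effort *= effort_growth             # exponentially more effort available
        difficulty *= difficulty_growth     # exponentially harder to improve further
    return capability

for steps in (10, 20, 40):
    print(steps, capability_after(steps))
# When the two growth rates match, the per-step gain is constant, so capability
# grows only linearly (10 -> 10.0, 20 -> 20.0, 40 -> 40.0). If difficulty grows
# faster than effort, the gains shrink towards a finite ceiling instead.
```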

The article may have seemed of significant pedagogical value to me, because I hadn't met these ideas before. For example, I have just started reading the Yudkowsky-Hanson AI foom debate.

10 comments

Comments sorted by top scores.

comment by Viliam · 2017-12-04T18:28:09.702Z · LW(p) · GW(p)

This feels like precisely the type of wrong but clever thinking that LW teaches people to avoid.

A brain is just a piece of biological tissue, there is nothing intrinsically intelligent about it.

Assuming the author is serious about this sentence, this would be the right moment to stop reading the article. Sure, you can show how brains are not "intrinsically intelligent" by using a proper definition of "intrinsically intelligent", but that's playing with definitions, and says little about the territory.

In particular, there is no such thing as “general” intelligence.

This is trivial to prove. If brains are not even "intelligent", they can hardly be "generally intelligent". ;)

In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. (...) The intelligence of a human is specialized in the problem of being human.

Yeah, someone has a clever definition of "highly specialized". Using this definition, even AIXI would be "highly specialized" in the problem of being AIXI. And the hypothetical recursively self-improving general artificial intelligence is also "highly specialized" in the problem of being a recursively self-improving general artificial intelligence. No need to worry about it becoming too smart.

If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt.

Following the logic of the previous paragraphs, if you cannot operate a computer without using a keyboard or a mouse, then you "cannot hope" to increase the computer's operating speed merely by buying faster processors and disks -- there will be no gains in computing power unless you also upgrade the keyboard and the mouse.

There is no evidence that a person with an IQ of 170 is in any way more likely to achieve a greater impact in their field than a person with an IQ of 130. In fact, many of the most impactful scientists tend to have had IQs in the 120s or 130s 

I guess someone never heard about this "base rates" stuff... (Highly specialized stuff, I guess.)
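To spell out the base-rate point with a toy calculation (the per-person probabilities below are hypothetical, chosen only for illustration): even if very high IQ strongly raises the per-person odds of landmark work, most landmark work can still come from the far more populous 120-140 band.

```python
import math

# Toy base-rate illustration. The per-person probabilities are hypothetical,
# chosen only to make the point; the population model is Normal(100, 15).
mean, sd, population = 100.0, 15.0, 7.6e9

def normal_cdf(x):
    return 0.5 * math.erfc(-(x - mean) / (sd * math.sqrt(2)))

def band_count(lo, hi):
    # number of people with lo <= IQ < hi under the normal model
    return (normal_cdf(hi) - normal_cdf(lo)) * population

p_landmark_midband = 1e-6   # hypothetical per-person probability, IQ 120-140
p_landmark_tail = 1e-4      # hypothetical: 100x higher per person, IQ 165+

print("IQ 120-140:", band_count(120, 140) * p_landmark_midband, "expected contributors")
print("IQ 165+   :", band_count(165, 250) * p_landmark_tail, "expected contributors")
# The 120-140 band is ~4 orders of magnitude more populous than the 165+ tail,
# so it can dominate the headcount of eminent scientists even when the
# per-person odds strongly favour the tail; "most eminent scientists had IQs in
# the 120s-130s" therefore doesn't show that higher IQ adds nothing.
```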

A smart human raised in the jungle is but a hairless ape. Similarly, an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human. If it could, then exceptionally high-IQ humans would already be displaying proportionally exceptional levels of personal attainment; they would achieve exceptional levels of control over their environment, and solve major outstanding problems— which they don’t in practice.

Today I learned: Exceptionally high-IQ humans are incapable of solving major problems.

...giving up in the middle of the article, because I expect the rest to be just more of the same.

Replies from: DragonGod
comment by DragonGod · 2017-12-05T04:11:00.579Z · LW(p) · GW(p)

Going to reply here. I think the author is completely wrong, but you're missing several things.

Interpret this as a steelman. I do not agree with the author's conclusions or the essay's argument, but I think the essay was of pedagogical value. I think you're prematurely dismissing it.

---

This is trivial to prove. If brains are not even "intelligent", they can hardly be "generally intelligent". ;)

There is no generally intelligent algorithm. If you accept that intelligence is defined in terms of optimisation power, there is no intelligent algorithm that outperforms random search on all problems.

Worse, there is no intelligent algorithm that outperforms random search on most problems; this profound result is called the No Free Lunch Theorem.

If you define general intelligence as an intelligent algorithm that can optimise on all problems, then random search (and its derivatives) are the only generally intelligent algorithms.

Yeah, someone has a clever definition of "highly specialized". Using this definition, even AIXI would be "highly specialized" in the problem of being AIXI. And the hypothetical recursively self-improving general artificial intelligence is also "highly specialized" in the problem of being a recursively self-improving general artificial intelligence. No need to worry about it becoming too smart.

This follows from the fact that there is no generally intelligent algorithm (save random search). The vast majority of potential optimisation problems are intractable (I would say pathological, but I'm not sure that makes sense when I'm talking about the majority of problems). Most optimisation problems cannot be solved except via exhaustive search. Humanity's cognitive architecture is highly specialised in the problems it can solve. This is true of all non-exhaustive search methods.
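A minimal sketch of the intuition (my own toy example, not from the essay or the comment): a simple hill climber beats random search on a structured objective, but once the landscape is scrambled (a stand-in for a typical member of "all possible problems"), the advantage vanishes.

```python
import random

random.seed(0)
BITS = 12                                # bitstring length; the space has 2**12 points

def onemax(bits):                        # structured objective: number of 1-bits
    return sum(bits)

# "Scrambled" objective: the same multiset of fitness values, but assigned to
# points at random -- a stand-in for a typical member of "all possible problems".
perm = list(range(2 ** BITS))
random.shuffle(perm)

def scrambled(bits):
    return bin(perm[int("".join(map(str, bits)), 2)]).count("1")

def random_search(f, budget=60):
    return max(f([random.randint(0, 1) for _ in range(BITS)]) for _ in range(budget))

def hill_climb(f, budget=60):
    bits = [random.randint(0, 1) for _ in range(BITS)]
    current = best = f(bits)
    for _ in range(budget - 1):
        i = random.randrange(BITS)
        bits[i] ^= 1                     # flip one bit
        value = f(bits)
        if value >= current:
            current = value              # keep the flip
        else:
            bits[i] ^= 1                 # revert it
        best = max(best, value)
    return best

def mean_score(alg, f, trials=300):
    return sum(alg(f) for _ in range(trials)) / trials

print("onemax    -- hill climb:", mean_score(hill_climb, onemax),
      " random search:", mean_score(random_search, onemax))
print("scrambled -- hill climb:", mean_score(hill_climb, scrambled),
      " random search:", mean_score(random_search, scrambled))
# Typically the hill climber clearly wins on onemax but only ties random search
# on the scrambled landscape, where its assumption of local structure is useless.
```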

Today I learned: Exceptionally high-IQ humans are incapable of solving major problems.

The majority of exceptionally high-IQ humans do not, in fact, solve major problems. There are millions of people in the IQ 150+ range. How many of them are academic heavyweights (Nobel laureates, Fields medalists, ACM Turing Award winners, etc.)?

...giving up in the middle of the article, because I expect the rest to be just more of the same.

I think you should finish it.

Replies from: Viliam
comment by Viliam · 2019-03-18T21:44:03.530Z · LW(p) · GW(p)
there is no intelligent algorithm that outperforms random search on most problems; this profound result is called the No Free Lunch Theorem.

I am not familiar with the context of this theorem, but I believe that this is a grave misinterpretation. From a brief reading, my impression is that the theorem says something like "you cannot find useful patterns in random data; and if you take all possible data, most of them are (Kolmogorov) random".

This is true, but it is relevant only for situations where any data is equally likely. Our physical universe seems not to be that kind of place. (It is true that in a completely randomly behaving universe, intelligence would not be possible, because any action or belief would have the same chance of being right or wrong.)

When I think about superintelligent machines, I imagine ones that would outperform humans in this universe. The fact that they would be equally helpless in a universe of pure randomness doesn't seem relevant to me. Saying that an AI is not "truly intelligent" unless it can handle the impossible task of skillfully navigating completely random universes... that's trying to win a debate by using silly criteria.

comment by sarahconstantin · 2017-12-05T05:52:01.032Z · LW(p) · GW(p)

So, Chollet is the author of the important deep-learning library Keras (https://keras.io/), and at first glance there's reason to take him seriously. But I don't think this is a good essay. Some counter-arguments:

  • The No-Free-Lunch Theorem doesn't imply that relatively general reasoners can't exist; humans and animals are much more multipurpose than today's machine learning algorithms.
  • "High-IQ humans aren't ultra-powerful" doesn't seem to generalize well at all to AI. I don't know that IQ (a measure derived from psychometric tests) is even analogous to the kind of accuracy and performance metrics used in AI, though there's probably some correlation (high-IQ humans and good ML algorithms both perform well at chess and go).  Humans are affected by all kinds of phenomena like socioeconomic class and motivation which either aren't relevant to computers or would be classified as a type of "intelligence" if they were implemented in computers. And humans are much more powerful than animals, which is a more relevant scale of "differences in brain capacity" to artificial intelligence than variation between humans.
  • " Clearly, the intelligence of a single human, over a single lifetime, cannot design intelligence, or else, over billions of trials, it would have already occurred." This is a fully general argument against any unsolved problem ever being solved in the future; it's vacuous.
  • "no complex real-world system can be modeled as `X(t + 1) = X(t) * a, a > 1" -- unless "complex" is doing a lot of work there, this is obviously false. There are plenty of exponential-growth phenomena visible in the real world.  (Population growth and nuclear chain reactions, for instance.)
  • "The number of software developers has been booming exponentially for decades, and the number of transistors on which we are running our software has been exploding as well, following Moore’s law. Yet, our computers are only incrementally more useful to us than they were in 2012, or 2002, or 1992."  This is a subjective claim and not justified; but even if it's true, it doesn't mean that computer performance among various metrics can't grow exponentially; it has! But there's also declining marginal utility of any good (to humans) and Weber's law of logarithmic perception scales; exponential improvements in computer performance will by default seem like only linear improvements in "usefulness" to us.

Basically, the only thing here which is a valid argument against an intelligence explosion is the last section, which mentions bottlenecks and antagonistic processes (like the fact that collaboration among more people is more difficult, so more workers doesn't mean proportionately more progress). This is basically Robin Hanson's argument against FOOM, and is several years old, and Chollet doesn't really add anything new here.
As an argument considered in a vacuum, I don't think this article provides any new reason to update away from believing in an intelligence explosion.  

The fact that Chollet believes there won't be an intelligence explosion is, of course, an update both against Chollet's credibility on AI futurism (if you already think intelligence explosions are likely) and against the likelihood of an intelligence explosion (if you're impressed with Chollet's achievements), but that doesn't tell you where belief propagation is going to converge, or even whether it will.

I really liked Chollet's earlier essay, The Future of Deep Learning, which is more technical and agrees with a lot of the conclusions that I came to independently.  I'm inclined to believe that Chollet writing for a general audience on Medium may be practicing propaganda, while his more concrete futurist predictions seem very credible. 

In the above article, he says, "As such, this perpetually-learning model-growing system could be interpreted as an AGI—an Artificial General Intelligence. But don't expect any singularitarian robot apocalypse to ensue: that's a pure fantasy, coming from a long series of profound misunderstandings of both intelligence and technology. This critique, however, does not belong here."

You could take this to mean something like "Yes, AGI is possible and I have just laid out a rough path to getting there; but I want to strongly disaffiliate with science-fiction geeks."

Replies from: DragonGod
comment by DragonGod · 2017-12-05T06:23:02.708Z · LW(p) · GW(p)

I agree with all your criticisms. I also think the article is wrong and didn't update except against Chollet, but I found the article educational.

Things I learned.

  1. No free lunch theorem. I'm very grateful for this, and it made me start learning more about optimisation.
  2. From the above: there is no general intelligence. I had previously believed that finding a God's algorithm for optimisation would be one of the most significant achievements of the century. I discovered that's impossible.
  3. Exponential growth does not imply exponential progress, as exponential growth may meet exponential bottlenecks. This was also something I didn't appreciate. Upgrading from a level n intelligence to a level n+1 intelligence may require more relative intelligence than upgrading from a level n-1 to a level n. Exponential bottlenecks may result in diminishing marginal growth of intelligence.

The article may have seemed of significant pedagogical value to me, because I hadn't met these ideas before. For example, I have just started reading the Yudkowsky-Hanson AI foom debate.

comment by ChristianKl · 2017-11-30T23:10:30.823Z · LW(p) · GW(p)
The brain has hardcoded conceptions of having a body with hands that can grab, a mouth that can suck, eyes mounted on a moving head that can be used to visually follow objects (the vestibulo-ocular reflex), and these preconceptions are required for human intelligence to start taking control of the human body.

I see no good reason to believe this. In vivo experiments in monkeys suggest that even after development it's possible to add additional colors to the eyes via gene therapy in a way that causes them to be integrated.

Nonvisual senses like knowing cardinal directions or magnetoreception can also be learned in vivo by adding sensors.

Off the top of my head I don't have concrete examples of humans or apes managing to deal with having an 8-legged body, but I wouldn't be surprised if a human brain were fine at learning to do so, especially if it had that many limbs from birth.

I would guess that part of the reason why a 6-month-old human child is much less capable than a 6-year-old dog is that the human child depends a lot less on "hardcoded conceptions" than the dog.

The author of the article even notes that a human child that didn't grow up in the "nurturing environment of human culture" is radically different from one that did, which suggests that a lot isn't hardcoded. Unfortunately, the author doesn't notice the contradiction.

There are currently about seven million people with IQs higher than 150 — better cognitive ability than 99.9% of humanity — and mostly, these are not the people you read about in the news.

There are fewer than seven million people with IQs above 150. IQ is not normed so that the average citizen of the world scores 100.
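A quick back-of-the-envelope check (my own calculation, under the naive assumption that the Normal(100, 15) norm applies to the whole world population, which, as noted, it does not):

```python
import math

# Back-of-the-envelope check. Naively assumes IQ ~ Normal(100, 15) over the
# entire world population -- which, as noted above, is not how IQ is normed.
mean, sd = 100.0, 15.0
world_population = 7.6e9                       # rough 2017 figure

def tail_fraction(iq):
    z = (iq - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))   # P(IQ >= iq) under the normal model

for iq in (146, 150):
    frac = tail_fraction(iq)
    print(iq, f"{frac:.4%} of people,", f"~{frac * world_population / 1e6:.1f} million worldwide")
# Under this naive model, IQ 146 (roughly the 99.9th percentile) corresponds to
# ~8 million people, but IQ above 150 to only ~3 million -- so "about seven
# million people with IQs higher than 150" and "better than 99.9% of humanity"
# don't actually describe the same threshold.
```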

Saying that very high IQ doesn't matter when the richest man has something like a 160 IQ (translated from his 1590/1600 SAT score) is also misleading.

The same goes for US Federal Reserve leadership: Ben Bernanke scored similarly to Gates, and Janet Yellen, for whom no IQ or SAT numbers are public, was reportedly described by a colleague as a "small lady with a large IQ".

Crucially, the civilization-level intelligence-improving loop has only resulted in measurably linear progress in our problem-solving abilities over time.

We measure GDP growth as a percentage of last year's GDP, not in absolute numbers, because we believe GDP grows exponentially and not linearly.

Lastly, even if it were true that you couldn't improve AI software, no AI software improvement is necessary for FOOM. In The Age of Em, Robin Hanson lays out how Ems can go FOOM simply by increasing production of the hardware on which they run, without any improvement in their cognitive ability, even if they are only as smart as the smartest humans. An AGI can play all of the same tricks, and can likely improve its cognition as well, if the progress of AlphaGo is any indication.

comment by Gyrodiot · 2017-11-30T09:46:10.649Z · LW(p) · GW(p)

I think you wanted to link to this recent essay by François Chollet (AI researcher and designer of Keras, a well-known deep learning framework). The essay has also been discussed on Hacker News and on Twitter.

I'm currently writing an answer to this one. I think it would be beneficial to have extra material about the intelligence explosion that is disconnected from the "what should be done about it" question, which is so often tied to "sci-fi" scenarios.

Replies from: DragonGod
comment by DragonGod · 2017-11-30T12:11:18.696Z · LW(p) · GW(p)

Yes, that was my intention. I will edit this post to reflect it.

comment by RST · 2017-11-30T22:29:41.319Z · LW(p) · GW(p)

I am curious: which arguments do you find wrong? And what are your counterarguments?

Replies from: DragonGod
comment by DragonGod · 2017-12-01T05:47:22.093Z · LW(p) · GW(p)
"We understand flight - we can observe birds in nature to see how flight works. The notion that aircraft capable of supersonic speeds are possible is fanciful."