post by Eliezer Yudkowsky (Eliezer_Yudkowsky)
Continuation of: Recursive Self-Improvement
Constant natural selection pressure, operating on the genes of the hominid line, produced improvement in brains over time that seems to have been, roughly, linear or accelerating; the operation of constant human brains on a pool of knowledge seems to have produced returns that are, very roughly, exponential or superexponential. (Robin proposes that human progress is well-characterized as a series of exponential modes with diminishing doubling times.)
Recursive self-improvement - an AI rewriting its own cognitive algorithms - identifies the object level of the AI with a force acting on the metacognitive level; it "closes the loop" or "folds the graph in on itself". E.g. the difference between returns on a constant investment in a bond, and reinvesting the returns into purchasing further bonds, is the difference between the equations y = f(t) = m*t, and dy/dt = f(y) = m*y whose solution is the compound interest exponential, y = e^(m*t).
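To make that difference concrete, here is a toy numerical sketch (function names and parameters are mine, purely illustrative): the same per-step return rate m, with and without the returns folded back into the principal.

```python
# Toy Euler integration of the two equations above (illustrative only).
def linear_returns(m, t, dt=0.001):
    """dy/dt = m: returns on a fixed principal accumulate linearly."""
    y, steps = 0.0, round(t / dt)
    for _ in range(steps):
        y += m * dt  # the principal never grows; each step pays the same
    return y

def compound_returns(m, t, dt=0.001):
    """dy/dt = m*y: returns are reinvested, so growth compounds."""
    y, steps = 1.0, round(t / dt)
    for _ in range(steps):
        y += m * y * dt  # each step's return is proportional to current y
    return y
```

With m = 0.05 over t = 10, the linear version accumulates 0.5 while the compounding version approaches e^0.5 ≈ 1.65 - same rate, different equation, qualitatively different curve.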
When you fold a whole chain of differential equations in on itself like this, it should either peter out rapidly as improvements fail to yield further improvements, or else go FOOM. An exactly right law of diminishing returns that lets the system fly through the soft takeoff keyhole is unlikely - far more unlikely than seeing such behavior in a system with a roughly-constant underlying optimizer, like evolution improving brains, or human brains improving technology. Our present life is no good indicator of things to come.
Or to try and compress it down to a slogan that fits on a T-Shirt - not that I'm saying this is a good idea - "Moore's Law is exponential now; it would be really odd if it stayed exponential with the improving computers doing the research." I'm not saying you literally get dy/dt = e^y that goes to infinity after finite time - and hardware improvement is in some ways the least interesting factor here - but should we really see the same curve we do now?
RSI is the biggest, most interesting, hardest-to-analyze, sharpest break-with-the-past contributing to the notion of a "hard takeoff" aka "AI go FOOM", but it's nowhere near being the only such factor. The advent of human intelligence was a discontinuity with the past even without RSI...
...which is to say that observed evolutionary history - the discontinuity between humans, and chimps who share 95% of our DNA - lightly suggests a critical threshold built into the capabilities that we think of as "general intelligence", a machine that becomes far more powerful once the last gear is added.
This is only a light suggestion because the branching time between humans and chimps is enough time for a good deal of complex adaptation to occur. We could be looking at the sum of a cascade, not the addition of a final missing gear. On the other hand, we can look at the gross brain anatomies and see that human brain anatomy and chimp anatomy have not diverged all that much. On the gripping hand, there's the sudden cultural revolution - the sudden increase in the sophistication of artifacts - that accompanied the appearance of anatomically modern Cro-Magnons just a few tens of thousands of years ago.
Now of course this might all just be completely inapplicable to the development trajectory of AIs built by human programmers rather than by evolution. But it at least lightly suggests, and provides a hypothetical illustration of, a discontinuous leap upward in capability that results from a natural feature of the solution space - a point where you go from sorta-okay solutions to totally-amazing solutions as the result of a few final tweaks to the mind design.
I could potentially go on about this notion for a bit - because, in an evolutionary trajectory, it can't literally be a "missing gear", the sort of discontinuity that follows from removing a gear that an otherwise functioning machine was built around. So if you suppose that a final set of changes was enough to produce a sudden huge leap in effective intelligence, it does demand the question of what those changes were. Something to do with reflection - the brain modeling or controlling itself - would be one obvious candidate. Or perhaps a change in motivations (more curious individuals, using the brainpower they have in different directions) in which case you wouldn't expect that discontinuity to appear in the AI's development, but you would expect it to be more effective at earlier stages than humanity's evolutionary history would suggest... But you could have whole journal issues about that one question, so I'm just going to leave it at that.
Or consider the notion of sudden resource bonanzas. Suppose there's a semi-sophisticated Artificial General Intelligence running on a cluster of a thousand CPUs. The AI has not hit a wall - it's still improving itself - but its self-improvement is going so slowly that, the AI calculates, it will take another fifty years for it to engineer / implement / refine just the changes it currently has in mind. Even if this AI would go FOOM eventually, its current progress is so slow as to constitute being flatlined...
So the AI turns its attention to examining certain blobs of binary code - code composing operating systems, or routers, or DNS services - and then takes over all the poorly defended computers on the Internet. This may not require what humans would regard as genius, just the ability to examine lots of machine code and do relatively low-grade reasoning on millions of bytes of it. (I have a saying/hypothesis that a human trying to write code is like someone without a visual cortex trying to paint a picture - we can do it eventually, but we have to go pixel by pixel because we lack a sensory modality for that medium; it's not our native environment.) The Future may also have more legal ways to obtain large amounts of computing power quickly.
This sort of resource bonanza is intriguing in a number of ways. By assumption, optimization efficiency is the same, at least for the moment - we're just plugging a few orders of magnitude more resources into the current input/output curve. With a stupid algorithm, a few orders of magnitude more computing power will buy you only a linear increase in performance - I would not fear Cyc even if it ran on a computer the size of the Moon, because there is no there there.
On the other hand, humans have a brain three times as large, and a prefrontal cortex six times as large, as that of a standard primate our size - so with software improvements of the sort that natural selection made over the last five million years, it does not require exponential increases in computing power to support linearly greater intelligence. Mind you, this sort of biological analogy is always fraught - maybe a human has not much more cognitive horsepower than a chimpanzee, the same underlying tasks being performed, but in a few more domains and with greater reflectivity - the engine outputs the same horsepower, but a few gears were reconfigured to turn each other less wastefully - and so you wouldn't be able to go from human to super-human with just another sixfold increase in processing power... or something like that.
But if the lesson of biology suggests anything, it is that you do not run into logarithmic returns on processing power in the course of reaching human intelligence, even when that processing power increase is strictly parallel rather than serial, provided that you are at least as good at writing software to take advantage of that increased computing power as natural selection is at producing adaptations - five million years for a sixfold increase in computing power.
Michael Vassar observed in yesterday's comments that humans, by spending linearly more time studying chess, seem to get linear increases in their chess rank (across a wide range of rankings), while putting exponentially more time into a search algorithm is usually required to yield the same range of increase. Vassar called this "bizarre", but I find it quite natural. Deep Blue searched the raw game tree of chess; Kasparov searched the compressed regularities of chess. It's not surprising that the simple algorithm is logarithmic and the sophisticated algorithm is linear. One might say much the same of human progress looking closer to exponential, while evolutionary progress looks closer to linear. Being able to understand the regularity of the search space counts for quite a lot.
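A toy sketch of that contrast (the branching factor and depths are invented for illustration, not measured from chess): if playing strength tracks search depth, a brute-force searcher pays exponentially for each increment of depth.

```python
# Hedged toy model, not a chess engine: assume skill grows roughly with
# search depth d, and a raw game-tree search visits about b**d positions.
def nodes_for_depth(b, d):
    """Positions a brute-force search must visit to reach depth d."""
    return b ** d

def depth_for_budget(b, budget):
    """Deepest full-width search affordable within a node budget."""
    d = 0
    while b ** (d + 1) <= budget:
        d += 1
    return d
```

Each +1 of depth (a roughly linear skill gain) costs b times more nodes, so the brute-force searcher's skill is logarithmic in compute - while a searcher over the compressed regularities never touches the raw tree at all.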
If the AI is somewhere in between - not as brute-force as Deep Blue, nor as compressed as a human - then maybe a 10,000-fold increase in computing power will only buy it a 10-fold increase in optimization velocity... but that's still quite a speedup.
Furthermore, all future improvements the AI makes to itself will now be amortized over 10,000 times as much computing power to apply the algorithms. So a single improvement to code now has more impact than before; it's liable to produce more further improvements. Think of a uranium pile. It's always running the same "algorithm" with respect to neutrons causing fissions that produce further neutrons, but just piling on more uranium can cause it to go from subcritical to supercritical, as any given neutron has more uranium to travel through and a higher chance of causing future fissions.
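The pile analogy fits in one line: call k the expected number of further fissions (or further improvements) that each one triggers. The numbers below are invented for illustration:

```python
# Toy multiplication model (illustrative numbers, not reactor physics):
# each generation multiplies the population of neutrons/improvements by k.
# A resource bonanza raises k; the qualitative behavior flips at k = 1.
def after_generations(k, n, start=1.0):
    pop = start
    for _ in range(n):
        pop *= k  # the same "algorithm" every generation, just scaled by k
    return pop

fizzle = after_generations(0.95, 100)   # subcritical: dies away
runaway = after_generations(1.05, 100)  # supercritical: explosive growth
```

After a hundred generations, k = 0.95 has decayed to under one percent of its starting level while k = 1.05 has grown over a hundredfold - a small shift in the multiplier, an enormous shift in the trajectory.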
So just the resource bonanza represented by "eating the Internet" or "discovering an application for which there is effectively unlimited demand, which lets you rent huge amounts of computing power while using only half of it to pay the bills" - even though this event isn't particularly recursive of itself, just an object-level fruit-taking - could potentially drive the AI from subcritical to supercritical.
Not, mind you, that this will happen with an AI that's just stupid. But an AI already improving itself slowly - that's a different case.
Even if this doesn't happen - if the AI uses this newfound computing power at all effectively, its optimization efficiency will increase more quickly than before; just because the AI has more optimization power to apply to the task of increasing its own efficiency, thanks to the sudden bonanza of optimization resources.
So the whole trajectory can conceivably change, just from so simple and straightforward and unclever and uninteresting-seeming an act, as eating the Internet. (Or renting a bigger cloud.)
Agriculture changed the course of human history by supporting a larger population - and that was just a question of having more humans around, not individual humans having a brain a hundred times as large. This gets us into the whole issue of the returns on scaling individual brains not being anything like the returns on scaling the number of brains. A big-brained human has around four times the cranial volume of a chimpanzee, but 4 chimps != 1 human. (And for that matter, 60 squirrels != 1 chimp.) Software improvements here almost certainly completely dominate hardware, of course. But having a thousand scientists who collectively read all the papers in a field, and who talk to each other, is not like having one superscientist who has read all those papers and can correlate their contents directly using native cognitive processes of association, recognition, and abstraction. Having more humans talking to each other using low-bandwidth words, cannot be expected to achieve returns similar to those from scaling component cognitive processes within a coherent cognitive system.
This, too, is an idiom outside human experience - we have to solve big problems using lots of humans, because there is no way to solve them using ONE BIG human. But it never occurs to anyone to substitute four chimps for one human; and only a certain very foolish kind of boss thinks you can substitute ten programmers with one year of experience for one programmer with ten years of experience.
(Part of the general Culture of Chaos that praises emergence and thinks evolution is smarter than human designers, also has a mythology of groups being inherently superior to individuals. But this is generally a matter of poor individual rationality, and various arcane group structures that are supposed to compensate; rather than an inherent fact about cognitive processes somehow scaling better when chopped up into distinct brains. If that were literally more efficient, evolution would have designed humans to have four chimpanzee heads that argued with each other. In the realm of AI, it seems much more straightforward to have a single cognitive process that lacks the emotional stubbornness to cling to its accustomed theories, and doesn't need to be argued out of it at gunpoint or replaced by a new generation of grad students. I'm not going to delve into this in detail for now, just warn you to be suspicious of this particular creed of the Culture of Chaos; it's not like they actually observed the relative performance of a hundred humans versus one BIG mind with a brain fifty times human size.)
So yes, there was a lot of software improvement involved - what we are seeing with the modern human brain size, is probably not so much the brain volume required to support the software improvement, but rather the new evolutionary equilibrium for brain size given the improved software.
Even so - hominid brain size increased by a factor of five over the course of around five million years. You might want to think very seriously about the contrast between that idiom, and a successful AI being able to expand onto five thousand times as much hardware over the course of five minutes - when you are pondering possible hard takeoffs, and whether the AI trajectory ought to look similar to human experience.
A subtler sort of hardware overhang, I suspect, is represented by modern CPUs having a 2GHz serial speed, in contrast to neurons that spike 100 times per second on a good day. The "hundred-step rule" in computational neuroscience is a rule of thumb that any postulated neural algorithm which runs in realtime has to perform its job in less than 100 serial steps one after the other. We do not understand how to efficiently use the computer hardware we have now, to do intelligent thinking. But the much-vaunted "massive parallelism" of the human brain, is, I suspect, mostly cache lookups to make up for the sheer awkwardness of the brain's serial slowness - if your computer ran at 200Hz, you'd have to resort to all sorts of absurdly massive parallelism to get anything done in realtime. I suspect that, if correctly designed, a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less.
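The raw serial gap is easy to compute from the figures above (back-of-the-envelope arithmetic, nothing more):

```python
# Back-of-the-envelope from the numbers in the text: 2GHz CPUs vs.
# neurons spiking ~100 times per second.
neuron_serial_hz = 100       # spikes per second, on a good day
cpu_serial_hz = 2 * 10**9    # 2GHz clock

serial_advantage = cpu_serial_hz // neuron_serial_hz
# A twenty-million-fold gap in raw serial depth per second - the gap the
# brain's massive parallelism (and the hundred-step rule) has to paper over.
```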
So that's another kind of overhang: because our computing hardware has run so far ahead of AI theory, we have incredibly fast computers we don't know how to use for thinking; getting AI right could produce a huge, discontinuous jolt, as the speed of high-grade thought on this planet suddenly dropped into computer time.
A still subtler kind of overhang would be represented by human failure to use our gathered experimental data efficiently.
On to the topic of insight, another potential source of discontinuity. The course of hominid evolution was driven by evolution's neighborhood search; if the evolution of the brain accelerated to some degree, this was probably due to existing adaptations creating a greater number of possibilities for further adaptations. (But it couldn't accelerate past a certain point, because evolution is limited in how much selection pressure it can apply - if someone succeeds in breeding due to adaptation A, that's less variance left over for whether or not they succeed in breeding due to adaptation B.)
But all this is searching the raw space of genes. Human design intelligence, or sufficiently sophisticated AI design intelligence, isn't like that. One might even be tempted to make up a completely different curve out of thin air - like, intelligence will take all the easy wins first, and then be left with only higher-hanging fruit, while increasing complexity will defeat the ability of the designer to make changes. So where blind evolution accelerated, intelligent design will run into diminishing returns and grind to a halt. And as long as you're making up fairy tales, you might as well further add that the law of diminishing returns will be exactly right, and have bumps and rough patches in exactly the right places, to produce a smooth gentle takeoff even after recursion and various hardware transitions are factored in... One also wonders why the story about "intelligence taking easy wins first in designing brains" tops out at or before human-level brains, rather than going a long way beyond human before topping out. But one suspects that if you tell that story, there's no point in inventing a law of diminishing returns to begin with.
(Ultimately, if the character of physical law is anything like our current laws of physics, there will be limits to what you can do on finite hardware, and limits to how much hardware you can assemble in finite time, but if they are very high limits relative to human brains, it doesn't affect the basic prediction of hard takeoff, "AI go FOOM".)
The main thing I'll venture into actually expecting from adding "insight" to the mix, is that there'll be a discontinuity at the point where the AI understands how to do AI theory, the same way that human researchers try to do AI theory. An AI, to swallow its own optimization chain, must not just be able to rewrite its own source code; it must be able to, say, rewrite Artificial Intelligence: A Modern Approach (2nd Edition). An ability like this seems (untrustworthily, but I don't know what else to trust) like it ought to appear at around the same time that the architecture is at the level of, or approaching the level of, being able to handle what humans handle - being no shallower than an actual human, whatever its inexperience in various domains. It would produce further discontinuity at around that time.
In other words, when the AI becomes smart enough to do AI theory, that's when I expect it to fully swallow its own optimization chain and for the real FOOM to occur - though the AI might reach this point as part of a cascade that started at a more primitive level.
All these complications are why I don't believe we can really do any sort of math that will predict quantitatively the trajectory of a hard takeoff. You can make up models, but real life is going to include all sorts of discrete jumps, bottlenecks, bonanzas, insights - and the "fold the curve in on itself" paradigm of recursion is going to amplify even small roughnesses in the trajectory.
So I stick to qualitative predictions. "AI go FOOM".
Tomorrow I hope to tackle locality, and a bestiary of some possible qualitative trajectories the AI might take given this analysis. Robin Hanson's summary of "primitive AI fooms to sophisticated AI" doesn't fully represent my views - that's just one entry in the bestiary, albeit a major one.
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by luzr ·
2008-12-02T21:16:59.000Z
"I suspect that, if correctly designed, a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less."
I am glad I can agree for once :)
"The main thing I'll venture into actually expecting from adding "insight" to the mix, is that there'll be a discontinuity at the point where the AI understands how to do AI theory, the same way that human researchers try to do AI theory. An AI, to swallow its own optimization chain, must not just be able to rewrite its own source code;"
Anyway, my problem with your speculation about hard takeoff is that you seem to make the same conceptual mistake that you so dislike about Cyc - you seem to think that the AI will be mostly "written in the code".
I suspect it is very likely that the true working AI code will be relatively small and already pretty well optimized. The "mind" itself will be created from it by some self-learning process (my favorite scenario involves a weak AI as the initial "tutor") and will in fact mostly consist of a vast amount of classification coefficients and connections, or something like that (think bayesian or neural networks).
While it probably will be in the AI's power to optimize its "primal algorithm", gains there will be limited (it will be pretty well optimized by humans anyway). Its ability to reorganize its "thinking network" might be severely limited. Same as with humans - we nearly understand how a single neuron works, but are far from understanding the whole network. Also, with any further self-improvement the complexity grows further, and it is quite reasonable to predict this complexity will grow faster than the AI's ability to understand it.
I think it all boils down to a very simple showstopper - considering you are building a perfect simulation, how many atoms do you need to simulate an atom? (BTW, this is also a showstopper for the "nested virtual reality" idea.)
Note however that this whole argument is not really mutually exclusive with hard takeoff. The AI can still build a next-generation AI that is better. But the "self" part might not work. (BTW, the interesting part is that the "parent" AI might then face the same dilemma with its descendant's friendliness ;)
I also think that in all your "foom" posts, you underestimate the empirical form of knowledge. It sounds like you expect the AI to just sit down in the cellar and think, without many inputs and actions, then invent the theory of everything and take over the world.
That is not going to happen at least for the same reason why the endless chain of nested VRs is unlikely.
Replies from: rkyeun, nshepperd
↑ comment by rkyeun ·
2011-09-30T11:34:04.741Z
"I think it all boils down to a very simple showstopper - considering you are building a perfect simulation, how many atoms do you need to simulate an atom?"
The answer to that question is a blatant "At most, one." The universe is already shaped like itself.
Replies from: JoshuaZ
↑ comment by JoshuaZ ·
2011-09-30T12:41:53.235Z
Yes, but the bound of "at least one" is also very likely to be true for lots of purposes, if our understanding of the laws of physics is at all near correct.
↑ comment by nshepperd ·
2011-09-30T15:09:00.894Z
I think it all boils down to a very simple showstopper - considering you are building a perfect simulation, how many atoms do you need to simulate an atom?
Perfect simulation is not the only means of self-knowledge.
As for empirical knowledge, I'm not sure Eliezer expects an AI to take over the world with no observations/input at all, but he does think that people far overestimate the amount of observations an effective AI would need.
(Also, for an AI, "building a new AI" and "self-improving" are pretty much the same thing. There isn't anything magic about "self". If the AI can write a better AI, it can write a better AI; whether it calls that code "self" or not makes no difference. Granted, it may be somewhat harder for the AI to make sure the new code has the same goal structure if it's written from scratch, but there's no particular reason it has to start from scratch.)
comment by Phil_Goetz6 ·
2008-12-02T22:21:40.000Z
"All these complications is why I don't believe we can really do any sort of math that will predict quantitatively the trajectory of a hard takeoff. You can make up models, but real life is going to include all sorts of discrete jumps, bottlenecks, bonanzas, insights - and the "fold the curve in on itself" paradigm of recursion is going to amplify even small roughnesses in the trajectory."
Wouldn't that be a reason to say, "I don't know what will happen"? And to disallow you from saying, "An exactly right law of diminishing returns that lets the system fly through the soft takeoff keyhole is unlikely"?
If you can't make quantitative predictions, then you can't say that the foom might take an hour or a day, but not six months.
A lower-bound (of the growth curve) analysis could be sufficient to argue the inevitability of foom.
I agree there's a time coming when things will happen too fast for humans. But "hard takeoff", to me, means foom without warning. If the foom doesn't occur until the AI is smart enough to rewrite an AI textbook, that might give us years or decades of warning. If humans add and improve different cognitive skills to the AI one-by-one, that will start a more gently-sloping RSI.
comment by GenericThinker ·
2008-12-03T03:17:29.000Z
"But the much-vaunted "massive parallelism" of the human brain, is, I suspect, mostly cache lookups to make up for the sheer awkwardness of the brain's serial slowness - if your computer ran at 200Hz, you'd have to resort to all sorts of absurdly massive parallelism to get anything done in realtime. I suspect that, if correctly designed, a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less."
That is just patently false: the brain is massively parallel, and the parallelism is not cache look-ups - it would be more like current GPUs. The computational estimate does not account for why the brain has as much computational power as it does (~10^15 or more). When you talk about relative speed, what you have to remember is that we are tied to our perception of time, which runs at roughly 30-60FPS. Having speeds beyond 200Hz isn't necessary, since the brain doesn't have RAM or caches like a traditional computer to store solutions in advance in the same way. By running at 200Hz the brain is fast enough to give us real-time perceptions while having time to do multi-step operations. A nice thing would be if we could think about multiple things in parallel, the way a computer with multiple processors can focus on more than one application at the same time.
I think all these discussions of the brain's speed are fundamentally misguided, and show a lack of understanding of current neuroscience, computational or otherwise. To say "run the brain at 2GHz" - what would that mean? How would that work with our sensory systems? If you only have one processing element with only 6-12 functional units, then 2GHz is nice; if you have billions of little processors and your senses all run at around 30-60FPS, then 200Hz is just fine without being overkill, unless your algorithms require more than 100 serial steps. My guess would be that the brain uses parallel algorithms to process information to limit that possibility.
On the issue of mental processing power, look at savants: some of them can count in primes all day long or can recite a million digits of pi. For some reason the dysfunction in their brains allows them to tap into all sorts of computational power. The big issue with the brain is that we cannot focus on multiple things, and the way in which we perform, for example, math is not nearly as streamlined as a computer's. For my own part, I am at my limit multiplying a 3 digit number by a 3 digit number in my head. This is of course a function of many things, but it is in part a function of the limitations of short-term memory and the way in which our brains allow us to do math.
comment by anon19 ·
2008-12-03T04:05:40.000Z
luzr: You're currently using a program which can access the internet. Why do you think an AI would be unable to do the same? Also, computer hardware exists for manipulating objects and acquiring sensory data. Furthermore: by hypothesis, the AI can improve itself better than we can, because, as EY pointed out, we're not exactly cut out for programming. Also, improving an algorithm does not necessarily increase its complexity. And you don't have to simulate reality perfectly to understand it, so there is no showstopper there. Total simulation is what we do when we don't have anything better.
comment by Aron ·
2008-12-03T04:27:44.000Z
What could an AI do, yet still be unable to self-optimize? Quite a bit, it turns out: everything that a modern human can do as a minimum, and possibly a great deal more, since we have yet to demonstrate that we can engineer intelligence. (I admit here that it may be college-level material once discovered.)
If we define the singularity as the wall beyond which the future is unpredictable, I think we can have an effective singularity without FOOM. This follows from admitting that we can have computers that are superior to us in every way, without even achieving recursive modification. These machines then have all the attendant advantages of limitless hardware, replicability, perfect and expansive memory, deep serial computation, rationality by design, limitless external sensors, etc.
If it is useless to predict past the singularity, and if foom is unlikely to occur prior to the singularity, does this make the pursuit of friendliness irrelevant? Do we have to postulate foom = singularity in order to justify friendliness?
comment by Ian_C. ·
2008-12-03T04:51:24.000Z
"So if you suppose that a final set of changes was enough to produce a sudden huge leap in effective intelligence, it does beg the question of what those changes were."
Perhaps the final cog was language. The original innovation is concepts: the ability to process thousands of entities at once by forming a class. Major efficiency boost. But chimps can form basic concepts and they didn't go foom.
Because forming a concept is not good enough - you have to be able to do something useful with it, to process it. Chimps got stuck there, but we passed abstractions through our existing concrete-only processing circuits by using a concrete proxy (a word).
comment by michael_vassar3 ·
2008-12-03T05:52:23.000Z
Phil: It seems to me that the above qualitative analysis is sufficient to strongly suggest that six months is an unlikely high-end estimate for time required for take-off, but if take-off took six months I still wouldn't expect that humans would be able to react. The AGI would probably be able to remain hidden until it was in a position to create a singleton extremely suddenly.
Aron: It's rational to plan for the most dangerous survivable situations.
However, it doesn't really make sense to claim that we can build computers that are superior to ourselves but that they can't improve themselves, since making them superior to us blatantly involves improving them. That said, yes, it is possible that some other path to the singularity could produce transhuman minds that can't quickly self-improve and which we can't quickly improve, for instance drug-enhanced humans, in which case hopefully those transhumans would share our values well enough that they could solve Friendliness for us.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) ·
2008-12-03T08:42:56.000Z
AC, "raise the question" isn't strong enough. But I am sympathetic to this plea to preserve technical language, even if it's a lost cause; so I changed it to "demand the question". Does anyone have a better substitute phrase?
"All these complications is why I don't believe we can really do any sort of math that will predict quantitatively the trajectory of a hard takeoff."
Phil: Wouldn't that be a reason to say, "I don't know what will happen"? And to disallow you from saying, "An exactly right law of diminishing returns that lets the system fly through the soft takeoff keyhole is unlikely"?
These are different problems, akin to "predict exactly where Apophis will go" and "estimate the size of the keyhole it has to pass through in order to hit Earth". Or "predict exactly what this poorly designed AI will end up with as its utility function after it goes FOOM" versus "predict that it won't hit the Friendliness keyhole".
A secret of a lot of the futurism I'm willing to try and put any weight on, is that it involves the startling, amazing, counterintuitive prediction that something ends up in the not-human space instead of the human space - humans think their keyholes are the whole universe, because it's all they have experience with. So if you say, "It's in the (much larger) not-human space" it sounds like an amazing futuristic prediction and people will be shocked, and try to dispute it. But livable temperatures are rare in the universe - most of it's either much colder or much hotter. A place like Earth is an anomaly, though it's the only place beings like us can live; the interior of a star is much denser than the materials of the world we know, and the rest of the universe is much closer to vacuum.
So really, the whole hard takeoff analysis of "flatline or FOOM" just ends up saying, "the AI will not hit the human timescale keyhole." From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM. When you look at it that way, it's not so radical a prediction, is it?
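The "flatline or FOOM" dichotomy can be made concrete with a toy model (a sketch of my own, with illustrative parameters, not anything from the thread): let capability y grow as dy/dt = y^k. For k < 1, growth is merely polynomial; for k = 1, it is exactly exponential; for k > 1, it blows up in finite time. Landing exactly on the k = 1 knife-edge is the "keyhole":

```python
# Toy model of takeoff sensitivity: dy/dt = y**k, integrated by Euler's method.
# k < 1: polynomial growth (peters out relative to exponential),
# k = 1: exactly exponential (the knife-edge "keyhole"),
# k > 1: finite-time blowup (FOOM).
def grow(k, y0=1.0, dt=1e-3, t_max=10.0, cap=1e6):
    """Integrate dy/dt = y**k; stop when y hits cap (blowup) or t reaches t_max."""
    y, t = y0, 0.0
    while t < t_max and y < cap:
        y += (y ** k) * dt
        t += dt
    return t, y

t_sub, y_sub = grow(0.5)    # sub-exponential: y grows like a quadratic in t
t_exp, y_exp = grow(1.0)    # exponential: y grows like e**t
t_foom, y_foom = grow(1.5)  # superexponential: hits the cap well before t_max
```

Nudging k slightly above 1 turns steady exponential growth into finite-time divergence, which is the point about how unlikely it is for a self-modifying system to sit exactly in the soft-takeoff regime.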
comment by luzr ·
2008-12-03T08:59:34.000Z · LW(p) · GW(p)
"You're currently using a program which can access the internet. Why do you think an AI would be unable to do the same?"
I hope it will. Still, that would get it only to preexisting knowledge.
It can form many hypotheses, but it will have to TEST them (gain empirical knowledge). Think LHC.
BTW, note that there are problems in quantum physics that do not have analytical solutions. Some equations simply cannot be solved. Now of course, perhaps a superintelligence will find a way around that, but I believe there are quite solid mathematical proofs that it is not possible.
Also, computer hardware exists for manipulating objects and acquiring sensory data. Furthermore: by hypothesis, the AI can improve itself better than we can, because, as EY pointed out, we're not exactly cut out for programming. Also, improving an algorithm does not necessarily increase its complexity.
I am afraid that you have missed the part about the algorithm being essential, but not being the core of the AI mind. The mind can just as well be data. And it can be unoptimizable, for the same reason that some equations cannot be analytically solved.
And you don't have to simulate reality perfectly to understand it, so there is no showstopper there.
To understand certain aspects of reality, yes. All I am saying is that understanding certain aspects might not be enough.
What I suggest is that the "mind" might be something like a network of interconnected numerical values. To an outside observer, there will be no order in the connections or values. To understand the "mind" even as poorly as by simulation, you would need a much bigger mind, as you would have to simulate and carefully examine each of the nodes.
Crude simulation does not help here, because you do not know which aspects to look for. Anything can be important.
comment by Tim_Tyler ·
2008-12-03T09:45:17.000Z · LW(p) · GW(p)
The above qualitative analysis is sufficient to strongly suggest that six months is an unlikely high-end estimate for time required for take-off
We've been using artificial intelligence for over 50 years now. If you haven't started the clock already, why not? What exactly are you waiting for? There is never going to be a point in the future where machine intelligence "suddenly" arises. Machine intelligence is better than human intelligence in many domains today. Augmented, cultured humans are a good deal smarter than unmodified ones today. Eschew machines - see what kind of paid job you get with no phone, computer, or internet - if you want to see that for yourself.
comment by Thomas ·
2008-12-03T10:43:59.000Z · LW(p) · GW(p)
"So I stick to qualitative predictions. "AI go FOOM"."
Even if it is wrong - I think it is correct - it is the most important thing to consider.
comment by Vladimir_Slepnev ·
2008-12-03T14:22:50.000Z · LW(p) · GW(p)
I have a saying/hypothesis that a human trying to write code is like someone without a visual cortex trying to paint a picture - we can do it eventually, but we have to go pixel by pixel because we lack a sensory modality for that medium; it's not our native environment.
Eliezer, this sounds wrong to me. Acquired skills matter more than having a sensory modality. Computers are quite good at painting, e.g. see the game Crysis. Painting with a brush isn't much easier than pixel by pixel, and it's not a natural skill. Neither is the artist's eye for colour and shape, or the analytical ear for music (do you know the harmonies of your favourite tunes?) You can instantly like or dislike a computer program, same as a painting or a piece of music: the inscrutable inner workings get revealed in the interface.
comment by Ben_Jones ·
2008-12-03T15:16:49.000Z · LW(p) · GW(p)
because our computing hardware has run so far ahead of AI theory, we have incredibly fast computers we don't know how to use for thinking; getting AI right could produce a huge, discontinuous jolt, as the speed of high-grade thought on this planet suddenly dropped into computer time.
Now there's a scary thought.
comment by Phil_Goetz6 ·
2008-12-03T17:45:14.000Z · LW(p) · GW(p)
Eliezer: So really, the whole hard takeoff analysis of "flatline or FOOM" just ends up saying, "the AI will not hit the human timescale keyhole." From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM.
But the AI is tied up with the human timescale at the start. All of the work on improving the AI, possibly for many years, until it reaches very high intelligence, will be done by humans. And even after, it will still be tied up with the human economy for a time, relying on humans to build parts for it, etc. Remember that I'm only questioning the trajectory for the first year or decade.
(BTW, the term "trajectory" implies that only the state of the entity at the top of the heap matters. One of the human race's backup plans should be to look for a niche in the rest of the heap. But I've already said my piece on that in earlier comments.)
Thomas: Even if it is wrong - I think it is correct - it is the most important thing to consider.
I think most of us agree it's possible. I'm only arguing that other possibilities should also be considered. It would be unwise to adopt a strategy that has a 1% chance of making 90%-chance situation A survivable, if that strategy will make the otherwise-survivable 10%-chance situation B deadly.
comment by Vladimir_Golovin ·
2008-12-03T19:43:11.000Z · LW(p) · GW(p)
Computers are quite good at painting, e.g. see the game Crysis.
They do that using dedicated hardware. Try to paint Crysis in realtime 'per pixel', using a vanilla CPU.
comment by luzr ·
2008-12-04T09:30:07.000Z · LW(p) · GW(p)
They do that using dedicated hardware. Try to paint Crysis in realtime 'per pixel', using a vanilla CPU.
Interestingly, today's high-end vanilla CPU (quad-core at 3 GHz) would paint 7- to 8-year-old games just fine. That means that in another 8 years, we will be capable of running Crysis without a GPU.
comment by Benya (Benja) ·
2012-09-13T12:26:58.391Z · LW(p) · GW(p)
But this is generally a matter of poor individual rationality, and various arcane group structures that are supposed to compensate; rather than an inherent fact about cognitive processes somehow scaling better when chopped up into distinct brains. If that were literally more efficient, evolution would have designed humans to have four chimpanzee heads that argued with each other.
Note to self: Yes, you're right that the notion that evolution had a smooth path open to four-headed hominids is ludicrous, and the benefit of a well-designed four-headed human can't be evaluated based on this. But two chimpanzee brain hemispheres arguing with each other wouldn't be nearly as ludicrous, so don't reject Eliezer's argument because of this point.
comment by EGI ·
2013-02-23T15:06:43.805Z · LW(p) · GW(p)
If that were literally more efficient, evolution would have designed humans to have four chimpanzee heads that argued with each other.
No! This is what a human engineer would have done. Evolution cannot do that! (Though the premise is still correct.)
comment by dlrlw ·
2015-03-17T12:49:02.163Z · LW(p) · GW(p)
What is "FOOM"? Is it an acronym? What does it stand for?
Wordnik says "The sound of a muffled explosion." But that doesn't sound right. If AI goes FOOM, the explosion presumably won't be 'muffled'. :-)
Replies from: JoshuaZ, Good_Burning_Plastic, None, Lumifer
↑ comment by JoshuaZ ·
2015-03-17T13:16:37.952Z · LW(p) · GW(p)
It is sometimes used to suggest extremely rapid AI self-improvement. I don't know where the word originates, but I think it is intended as a sound effect.
↑ comment by Good_Burning_Plastic ·
2015-03-17T13:28:54.163Z · LW(p) · GW(p)
The Jargon page says "Onomatopoetic vernacular for an intelligence explosion." I have no idea why it isn't something louder-sounding like "BOOM" (or "Boom!!!") instead, either.
Replies from: Lumifer
↑ comment by Lumifer ·
2015-03-17T15:22:22.197Z · LW(p) · GW(p)
I have no idea why isn't something louder-sounding like "BOOM"
I suspect it's not loud because of
"'But oh, beamish nephew, beware of the day,
If your Snark be a Boojum! For then
You will softly and suddenly vanish away,
And never be met with again!'
↑ comment by [deleted] ·
2015-03-17T13:41:38.356Z · LW(p) · GW(p)
I think it may suggest a non-violent, non-destructive rapid expansion, like how shaving foam goes foom :)
↑ comment by Lumifer ·
2015-03-17T15:20:08.152Z · LW(p) · GW(p)
FOOM, in the context of LW, is extremely rapid take-off of artificial intelligence.
If an AI can improve itself, and the rate at which it improves is itself a function of how good (= smart) it is, then the growth of its capabilities will resemble an exponential and will at some point rapidly escalate into the superhuman realm. That is FOOM.
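The difference between a fixed-strength optimizer and a self-improving one can be sketched in a few lines (a toy model with made-up rates, just to show the shape of the curves):

```python
# Toy comparison: a fixed-strength optimizer improving a system each step,
# vs. a system whose per-step improvement scales with its current capability.
def constant_optimizer(steps, rate=1.0):
    """Capability grows by a fixed increment each step: linear growth."""
    y = 1.0
    for _ in range(steps):
        y += rate
    return y

def self_improving(steps, rate=0.1):
    """Each step's improvement is proportional to current capability:
    the loop is 'folded in on itself', giving compound (geometric) growth."""
    y = 1.0
    for _ in range(steps):
        y += rate * y
    return y

print(constant_optimizer(100))  # 101.0
print(self_improving(100))      # ~13780.6 (i.e. 1.1**100)
```

Same number of steps, wildly different outcomes: the constant optimizer is the analogue of evolution steadily improving brains, while the compounding loop is the recursive self-improvement case.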