What’s the likelihood of only sub-exponential growth for AGI?

post by M. Y. Zuo · 2021-11-13T22:46:25.277Z · LW · GW · No comments

This is a question post.

A possible scenario could be that the first confirmed AGI(s) are completely unimpressive, i.e. with capacities equivalent to or less than those of an average 10-year-old child. Tremendous effort is then put into ’growing’ its capacities, with the best results yielding growth only along a sub-exponential curve, probably with some plateaus as well, so that any AGI may take generations to become truly superhuman.

I’m asking this as I haven’t come across any serious prior discussion other than this: https://www.lesswrong.com/posts/77xLbXs6vYQuhT8hq/why-ai-may-not-foom [LW · GW], though admittedly my search was pretty brief.

Is there any serious expectation for this kind of scenario? 

Answers

answer by Logan Zoellner · 2021-11-14T19:51:57.099Z · LW(p) · GW(p)

Here are some plausible ways we could be trapped at a "sub-adult-human" AGI:

  1. There is no such thing as "general intelligence". For example, profoundly autistic humans have the same size brains as other human beings, but their ability to navigate the world we live in is limited by their weaker social skills. Even an AI with many super-human skills could, in the same way, still fail to influence our world.
  2. Artificial general intelligence is possible, but it is extremely expensive. Perhaps the first AGI requires an entire power-plant's worth of electricity to run. Biological systems are much more efficient than manmade ones. If Moore's law "stops", we may be trapped in a future where only sub-human AI is affordable enough to be practical.
  3. Legal barriers. Just as you are not legally allowed to carry a machine-gun wherever you please, AI may be regulated such that human-level AI is only allowed under very controlled circumstances. Nuclear power is a classic example of an industry where innovation stopped because of regulation.
  4. Status quo bias. Future humans may simply not care as much about building AGI as present humans do. Modern humans could undoubtedly build pyramids much taller than those in Egypt, but we don't because we aren't all that interested in pyramid-building.
  5. Catastrophe. Near-human AGI may trigger a catastrophe that prevents further progress. For example, the perception that "the first nation to build AGI will rule the world" may lead to an arms race that ends in catastrophic world war.
  6. Unknown unknowns. Predictions are hard, especially about the future.
comment by Anon User (anon-user) · 2021-11-14T22:14:00.745Z · LW(p) · GW(p)

#5 is an interesting survival possibility...

comment by M. Y. Zuo · 2021-11-15T22:43:35.947Z · LW(p) · GW(p)

#1 resonates with me somehow. Perhaps because I’ve witnessed a few people in real life, profoundly autistic, or disturbed, or on drugs, speak in something like an informal spoken variant of GPT-3, or is it the other way around?

answer by JBlack · 2021-11-14T12:07:17.129Z · LW(p) · GW(p)

I don't think anyone really expects this sort of scenario, but it does make for some nice safe science fiction stories where humans get to play a meaningful role in the outcome of the plot.

Personally I think there are a few pretty major things working against it.

It seems likely that if we can get to chimpanzee-equivalent capability at all (about the minimum I'd call AGI), scaling up by a factor of 10 with only relatively few architectural tweaks will give something at least as generally capable as a human brain. Human brains are only about 4x the size of, and not apparently much more complex per unit mass than, those of the other great apes. Whatever the differences are, they developed in something like 1/1000th of the total history of our species. We're far too homogeneous in ability (on an inter-species absolute intelligence scale), and our evolution too recent, for anything to be fundamentally more complex about our brains compared with apes'. If the apes had stagnated in intelligence for a billion years before making some intelligence breakthrough to us in a much shorter time, I might have a different opinion. The evidence seems to point toward a change in neuron scaling in primates that meant a cheaper increase in neuron counts and not much else. As soon as this lowered the marginal cost of supporting more neurons below the benefits of having them, our ancestors fairly steadily increased in both brain size and intelligence from there to here, or at least as steadily as evolution ever gets.

If there are fundamental barriers, then I expect them to be at least as far above us as we are above chimpanzees, because there are no signs that we're anywhere near as good as it gets. We're most likely the stupidest possible species capable of nontrivial technology; if we weren't, we'd most likely have found evidence of earlier, stupider species on Earth that developed it before us.

While I am not certain, I suspect that even otherwise chimpanzee-equivalent AGIs, enhanced with the narrow superhuman capabilities we have already built today, might be able to outsmart us even while being behind us in some ways. While humans too can make use of narrow superhuman AI capabilities, we still have to use them as external tools limited by external interfaces, rather than having them integrated into our minds from birth and as automatic to us as focusing our eyes. There is every reason to expect that the relative gains would be very much greater.

Even if none of those are true, and general intelligence stops at 10-year-old human capability, and they can't directly use our existing superhuman tools better than we can, I wouldn't bet heavily against the possibility that merely scaling up speed 100 times - studying and planning for up to a subjective century each year - could let them work out how to get through the barrier to better capabilities in a decade or two. The same goes if they could learn in concert with others, all of them benefiting in some way directly rather than via comparatively slow linear language. There may be many other ways in which we are biologically limited but don't think of as important, because everything else we've ever known is limited in the same ways. Some AGIs might not share those limits, and could work around their deficiencies in some respects by using capabilities that nothing else on Earth has.

comment by M. Y. Zuo · 2021-11-14T14:23:34.320Z · LW(p) · GW(p)

Thanks JBlack, those are some convincing points. Especially the point that even a chimpanzee-level intelligence directly interfaced with present-day supercomputers would likely yield tangible performance greater than any human's in many ways. Though perhaps the danger is lessened if, for the first few decades, the energy and space requirements are at a minimum equal to those of a present-day supercomputing facility; that is a big and obvious enough ’bright line’, so to speak.

answer by Dagon · 2021-11-13T21:53:55.628Z · LW(p) · GW(p)

IMO, the best argument that it won't be exponential (for very long) is that almost nothing is.  Many things that appear exponential are actually sigmoid, and even things that ARE exponential for a time hit a limit and either plateau or collapse.

The question isn't "is it exponential forever?", but "is it superlinear for long enough to foom?".  I don't think I've heard compelling data on either side of that question.
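A toy numerical sketch of that distinction (the growth rate, ceiling, and starting level below are arbitrary assumptions chosen purely for illustration): an exponential curve and a logistic (sigmoid) curve with the same growth rate track each other closely at first and only diverge once the sigmoid's ceiling starts to bind, which is why early data alone cannot easily tell them apart.

```python
import math

r = 0.5     # growth rate shared by both curves (arbitrary assumption)
K = 1000.0  # ceiling ("carrying capacity") of the sigmoid (arbitrary assumption)
x0 = 1.0    # starting capability level (arbitrary assumption)

def exponential(t):
    # Pure exponential growth: x(t) = x0 * e^(r*t)
    return x0 * math.exp(r * t)

def sigmoid(t):
    # Logistic growth with the same rate r, but capped at K
    return K / (1.0 + ((K - x0) / x0) * math.exp(-r * t))

for t in range(0, 21, 4):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  sigmoid={sigmoid(t):7.1f}")
```

Running this shows the two columns agreeing closely for the first several steps, after which the sigmoid flattens toward its cap while the exponential keeps climbing.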

comment by JBlack · 2021-11-14T12:48:49.650Z · LW(p) · GW(p)

I don't think "exponential" vs "superlinear" or even "sublinear" matters much. Those are all terms for asymptotic behaviour in the far future, while all the problems arise in the relatively short term after the first AGI.

For FOOM purposes, how long it takes to get from usefully human-level capabilities to as far above us as we are above chimpanzees (let's call it 300 IQ for short, despite the technical absurdity of the label) is possibly the most relevant timescale.

Could a few hundred humans wipe out a world full of chimpanzees in the long term? I'm pretty sure the answer is yes. If there exists an AGI that is as far above us as a human is above a chimpanzee, how long does it take for there to be a few hundred of them? My median estimate is "a few years" if the first one takes Manhattan Project levels of investment, or less if it doesn't.

After that point, our existence is no longer mostly in our own hands. If we're detrimental to whatever goals the AGI(s) have, we have a serious risk of becoming extinct shortly thereafter. Whether they FOOM to Jupiter-brain levels in 1 year or merely populate the Earth with more copies of 300 IQ AGIs that never grow past that in the next million years is irrelevant.

Personally I think the answer to the "first AGI to 300 IQ" timescale is somewhere between "0 days, we were in an overhang and the same advance that made AGI also made superhuman AGI" on the short end and something like 20 years on the long end. I tend toward the overhang end because I've seen a lot of algorithm improvements make large discontinuous jumps in capability in lots of fields, including ML, and because I think the human capability range is a very narrow and twisty target to hit in absolute terms. By the time you get the weakest factor's capabilities up to minimum human-like General Intelligence levels, the rest of the factors are probably already superhuman.

So in my personal timescale estimates, that leaves "how long before we get the first AGI" as the most relevant.

comment by M. Y. Zuo · 2021-11-13T22:53:51.774Z · LW(p) · GW(p)

So it seems that even ‘fooming’ would be a coin toss as it stands?

answer by delton137 · 2021-11-14T22:26:54.783Z · LW(p) · GW(p)

It's hard to imagine a "general intelligence" getting stuck at the level of a 10-year-old child in all areas -- certainly it will have an ability to interface with hardware that allows it to perform rapid calculations or run other super-human algorithms.

But there are some arguments suggesting that intelligence scaling at an exponential rate can't go on indefinitely, and in fact that the limits to exponential growth ("foom") may be hit very soon after AGI is developed, so that foom is essentially impossible. For instance, see this article by Francois Chollet:
https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec

He makes a number of interesting points. For instance, he notes the slow development of science despite exponentially more resources going into it. He also notes that science and other areas of human endeavor involve recursive self-improvement, yet they seem to be growing linearly, not exponentially.

Another point is that some (e.g. chaotic) physical systems are just impossible to predict over time scales of days or longer, even for a superintelligent AI with vast computational resources. So there are some limitations there, at least.

The other related reference I would recommend is this interview with Robin Hanson: https://aiimpacts.org/conversation-with-robin-hanson/

comment by M. Y. Zuo · 2021-11-16T03:46:27.693Z · LW(p) · GW(p)

Thanks for the links. It may be that the development of science, and of all technical endeavours in general, follows a pattern of punctuated equilibrium: sub-linear growth, or even regression, for the vast majority of the time, interspersed with brief periods of tremendous change.

answer by Jon Garcia · 2021-11-14T02:38:22.741Z · LW(p) · GW(p)

I think that the mere development of an AGI with 10-year-old-human intelligence (or even infant-level) would first require stumbling across crucial generalizable principles of how intelligence works. In other words, by this time, there would have to be a working theory of intelligence that could probably be scaled up pretty straightforwardly. Then the only limit to an intelligence explosion would be limitations in hardware or energy resources (this may be more of a limitation while the theory of intelligence is still in its infancy; future designs might be more resource-efficient). I would expect economic pressure and international politics to create the perfect storm of unaligned incentives such that, once a general theory is found, even if resource-intensive, you will see exponential growth (actually sigmoidal, as [temporary] hard limits are approached) in the intelligence of the biggest AGI systems.

You might find a rate-limiting step in the time it takes to train an AGI system, though. This would extend the window of opportunity for making any course corrections before superintelligence is reached. However, once it's trained, it might be easy to make a bunch of copies and combine them into a collective superintelligence, even if training a singleton ASI from scratch would take a much longer time on its own. Let's hope that a working theory of general alignment comes no later than a working theory of general intelligence.

comment by M. Y. Zuo · 2021-11-14T03:59:58.908Z · LW(p) · GW(p)

Thanks for the in-depth answer. The engineer side of me gets leery whenever ‘straightforward real-world scaling following a working theory’ is a premise; the likelihood of there being no significant technical obstacles at all, other than resources and energy, seems vanishingly low. A thousand and one factors could impede the realization of even the most perfect theory, much as with other complex engineered systems. There could be surprises such as some dependence on the substrate, on the specific arrangement of hardware, on other emergent factors, on software factors, etc.

Replies from: conor-sullivan
comment by Lone Pine (conor-sullivan) · 2021-12-27T22:34:43.421Z · LW(p) · GW(p)

If there is a general theory of intelligence and it scales well, there are two possibilities. Either we are already in a hardware overhang, and we get an intelligence explosion even without recursive self-improvement, or the compute required is so great that it takes an expensive supercomputer to run, in which case it’ll be a slow takeoff. The probability that we have exactly human-intelligence levels of compute seems low to me. Probably we either have way too much or way too little.

answer by James_Miller · 2021-11-13T23:02:47.329Z · LW(p) · GW(p)

As discussed on this podcast I did with Robin Hanson, the UFOs seen by the US Navy might be aliens. If this is true, the aliens would seem to have a preference for keeping the universe in its natural state, and so probably wouldn't let us create a paperclip maximizer. These aliens might stop us from developing AI that is too powerful.

comment by Charlie Steiner · 2021-11-14T18:26:06.763Z · LW(p) · GW(p)

Although every sentence here is technically correct, I still feel I should share a link to a nice video explaining how you get the Navy observations without any aliens being involved.

Replies from: James_Miller
comment by James_Miller · 2021-11-17T21:13:04.985Z · LW(p) · GW(p)

Sam Altman seems to take UFOs seriously. See 17:14 of this talk.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2021-11-18T02:26:51.235Z · LW(p) · GW(p)

Okay? Is your implied point that Sam Altman, or Tyler Cowen, is such an epistemic authority figure that I too should take UFOs seriously?

Replies from: James_Miller
comment by James_Miller · 2021-11-18T02:56:07.261Z · LW(p) · GW(p)

Yes. I don't know you, so please don't read this as an insult. But if Sam Altman and Tyler Cowen take an idea seriously, don't you have to as well? Remember that disagreement is disrespect, so your saying that UFOs should not be taken seriously amounts to claiming that you have a better reasoning process than either of those two men.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2021-11-18T11:46:02.011Z · LW(p) · GW(p)

I don't take it as an insult, I just think it's a wrong line of reasoning. I'm a pretty smart guy myself, but I'm sure I'm wrong about things too! (Even though for any specific thing I believe, I of course think I'm probably right.)

If someone else is right about something and I'm wrong, I don't want them to deferentially "take me seriously." I want them to be right and show me why I'm wrong - though not to the extent of spamming my email inbox. Makes me want to go re-read Guided By The Beauty Of Our Weapons.

Replies from: James_Miller
comment by James_Miller · 2021-11-18T13:12:14.967Z · LW(p) · GW(p)

Normally this is a good approach, but a problem with the "UFOs are aliens" theory is that there is a massive amount of evidence (much of it undoubtedly crap), the most important of which is likely classified top secret, so you have to put a lot of weight on what other people (especially those with direct access to people holding top-secret security clearances) say they believe.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2021-11-18T13:43:26.745Z · LW(p) · GW(p)

The best photos of my house from space are also classified, yet I don't live in total ignorance about what my house looks like from above.

If the classified photos of my house did show something surprising (maybe I've got some graffiti that I've never noticed), I would update my beliefs, but because I have pretty good evidence about the top of my house already, I don't need to wait around for the best possible photos, and in fact I can give pretty good predictions of what those photos will look like.

Suppose my house is in some declassified Navy photos and there's a bird over my roof. If someone claims that my roof has a picture of a bird on it, this is very bad evidence for my house actually having a picture of a bird on it, and very good evidence for this person having made a mistake.

comment by M. Y. Zuo · 2021-11-14T00:03:26.152Z · LW(p) · GW(p)

Thanks, that does seem to be a possible motive for constant observation and interference, if such aliens were to exist.
