The biological intelligence explosion

post by Rob Lucas · 2021-07-25T13:08:27.588Z · 5 comments

Summary:  Human genetic engineering could lead to intelligence enhancement, producing genetic engineers who are better at genetic engineering (and at research on pathways to improving intelligence), which in turn leads to a continuing process of greater and greater intelligence. This iterative process would be a human intelligence explosion.

There’s a view that AI will hit a point where it begins an intelligence explosion: an AI system will be designed that is better at designing AI systems than its designers were.  As such, it will be able to modify its own design so that it, or a second-generation version of it, is created that is superior to it.  And this next version will be sufficiently advanced that it can create a still more advanced version.  Etc.  You end up with an iterative process whose next-generation progress builds on its current state, and thus with exponential growth, at least until some limiting factor is reached.  Hence, intelligence explosion.

This seems like a possible outcome, though the absolute rate of change isn’t clear.

But aside from computer intelligence, there’s another pathway to intelligences improving their own design: humans.  With current genome-reading technology we are identifying a myriad of genes related to intelligence.  While each individual gene has only a small effect, the combined contribution of many (hundreds) of such genes can be shown to have a very large effect on IQ.
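To make concrete how many small effects can add up, here is a minimal sketch of an additive polygenic score. The effect sizes and allele frequencies below are made up for illustration, not real GWAS estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

n_variants = 500  # hypothetical number of IQ-associated variants
effect_sizes = rng.normal(0.0, 0.1, n_variants)   # IQ points per effect allele (made up)
allele_freqs = rng.uniform(0.1, 0.9, n_variants)  # population frequency of each effect allele

# One simulated genome: 0, 1, or 2 copies of each effect allele.
genotype = rng.binomial(2, allele_freqs)

# Additive polygenic score, centered so the population mean is 0:
# each variant contributes a little, but the sum can be large.
score = float(np.dot(effect_sizes, genotype - 2 * allele_freqs))
print(f"Predicted IQ shift from {n_variants} small-effect variants: {score:+.1f}")
```

In practice the weights come from genome-wide association studies, and current scores capture only part of the heritable variance; the point of the sketch is just that a weighted sum of hundreds of tiny contributions can shift the predicted trait substantially.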

While gene therapy is in its early stages, it is an emerging technology undergoing rapid progress.  It is currently difficult to modify even individual genes in adult organisms: there are off-target effects to worry about, it’s not possible to deliver the genes to every cell, the immune system will attack the viruses used for gene delivery, etc.  But there is already progress on all of these problems.  It’s not crazy to think that within a couple of decades we may be able to safely alter dozens or hundreds of genes in adult humans, and if not in 100% of cells, then in a high enough percentage for effective therapy.

If we imagine such a world, we can see researchers making use of such treatments to improve their own intelligence.  This could lead to clearer thinking, more creativity, and both more and better ideas.  Which could then lead to the next wave of breakthroughs, perhaps in which genes to alter or in other avenues for improving intelligence.  And as those advances were developed and implemented, the researchers reaping the benefit could use their newly augmented intellect to iterate on the next advances…

A biological intelligence explosion.

It would likely be much more limited than the AI intelligence explosion.  Human brains are constrained in various ways (size being an obvious one) that computers are not.  And an AI could start from scratch and use a completely new computing substrate in its next design, but that likely wouldn’t be an option for our human researchers, who are manipulating already-existing, living human brains (so you don’t want to do anything that risks killing them).  Nevertheless, even within the constraints, there still seems to be a lot of room for improvement, and each improvement should make the next one more likely.

Of course, maybe this will simply be taboo and not be done.  Or maybe AI will come along first.

But then, maybe not.

Now, there’s a question of whether worries about the emergence of an artificial super-intelligence might be mirrored in analogous worries about a resulting biological super-intelligence.  I think those worries are at least mitigated, though not resolved, in this case, for a few reasons:

  1. As stated above, biological intelligence faces some hard-to-overcome constraints, like brain size given the human body, and the fact that the substrate of computation remains neurons.  These constraints seem unlikely to be overcome and thus impose hard limits on the maximum progress of a biological intelligence explosion.
  2. The alignment problem is difficult in part because an AI system will be so alien to us.  Humans, on the other hand, are at least capable of understanding human values.  While this doesn’t mean that enhanced human intelligences will necessarily be aligned with unenhanced humans, it does mean that the problem may be more tractable.

However, that said, there still seem to be reasons for concern.  While there are hard limits on human intelligence, we don’t quite know where they are, and evolution certainly hasn’t reached them.  This is because the constraints faced in our ancestral environment have been severely loosened in a modern context.  Energy use, for instance, was a major constraint, but food today is very cheap, and a brain using even 10 times as much energy could easily be supplied with enough calories for its computational work.  If that energy use reached 100 times current usage it might require major changes to other organ systems, but that seems like a feasible late-stage development in our intelligence explosion.  Survivability in our ancestral environment was also constrained heavily by locomotion, but this is a much weaker constraint today.  So brain size, for instance, could get much larger before reaching a fundamental limit.  There are other things, like which tasks brains are specialized for, that could similarly be improved.  Mathematical aptitude, for instance, probably didn’t undergo very strong selection in the past but could be strongly favoured if it were seen as useful.  Etc.  All this suggests that while human intelligence would likely reach a limit far before AI did, that limit is quite far from the current level.
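To put rough numbers on the energy claim, here is a back-of-envelope sketch. The figures assumed below (brain ≈ 20 W, whole-body basal rate ≈ 100 W) are common ballpark values, not measurements:

```python
# Back-of-envelope: food-energy cost of a scaled-up brain (ballpark figures).
SECONDS_PER_DAY = 86_400
JOULES_PER_KCAL = 4_184

BRAIN_W = 20   # resting human brain power draw, roughly 20 W
BODY_W = 100   # whole-body basal metabolic rate, roughly 100 W

def kcal_per_day(watts: float) -> float:
    return watts * SECONDS_PER_DAY / JOULES_PER_KCAL

for scale in (1, 10, 100):
    total_w = BODY_W - BRAIN_W + BRAIN_W * scale
    print(f"{scale:>3}x brain: {kcal_per_day(BRAIN_W * scale):>8.0f} kcal/day for the brain, "
          f"{kcal_per_day(total_w):>8.0f} kcal/day total")
```

On these assumptions, a 10x brain puts total intake around elite-endurance-athlete levels (~5,800 kcal/day), while 100x (~43,000 kcal/day) clearly demands the kind of major changes to other organ systems described above.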

Similarly, while the alignment problem may be more tractable in humans, it’s certainly not solved.  We have elaborate political systems because we don’t simply trust that our neighbors share our goals, so there seems little reason to assume that the super-intelligent would share the goals of the rest of society in general.  Moreover, there is one problem that is actually harder with human super-intelligence than with machine super-intelligence: even at the beginning of the process, we have no access to the source code.  There’s no chance to make sure the “machine” (i.e., people) is aligned with us from the beginning.  To some extent it may be possible to do this with regulatory oversight of the enhancement process, but this seems a cruder tool than designing the system from scratch.

For these reasons I think a human intelligence explosion raises concerns similar to those that have been discussed regarding an AI intelligence explosion.

5 comments

Comments sorted by top scores.

comment by Olomana · 2021-07-26T07:01:47.674Z

An Einstein runs on pretty much the same hardware as the rest of us.  If genetic engineering can get us to a planet full of Einsteins without running into hardware limitations, that may not qualify as an "intelligence explosion", but it's still a singularity in that we can't extrapolate to the future on the other side.

Another thought... genetic engineering may be what will make us smart enough to build a safe AGI.

comment by JBlack · 2021-07-26T03:45:41.460Z

Are you sure that a brain using 10 times as much energy could easily be supplied with enough calories?

Brains already use disproportionately more energy than most other parts of the body, at about 10 times the per-kilogram rate of the rest of the body.  Scaling this up by another factor of 10 seems wildly optimistic. For example, the brain's energy appetite is already so great that its function starts to be impaired within about 5 seconds of its supply of oxygenated blood ceasing, with unconsciousness following in about 10 seconds.

Blood leaving the brain has a normal range of oxygen saturation down to around 55%, and that can't really go much lower without running into serious problems with existing biology. To get 10x the energy you would therefore need to increase blood flow to the brain to ten times the human baseline. This is actually twice the average blood flow baseline humans typically employ for their whole body. I don't think you could manage that without massive re-engineering, not just within the brain but elsewhere as well.
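For concreteness, here is that arithmetic as a sketch, assuming the brain takes roughly 20% of a ~5 L/min resting cardiac output (both rough textbook ballparks, not figures from the comment):

```python
# Rough check of the 10x blood-flow claim (assumed ballpark values).
CARDIAC_OUTPUT_L_MIN = 5.0  # resting whole-body blood flow, ~5 L/min
BRAIN_SHARE = 0.20          # brain's share of cardiac output, ~15-20%

brain_flow = CARDIAC_OUTPUT_L_MIN * BRAIN_SHARE  # ~1.0 L/min at baseline
brain_flow_10x = brain_flow * 10                 # ~10 L/min

print(f"baseline brain flow: {brain_flow:.1f} L/min")
print(f"10x brain flow: {brain_flow_10x:.1f} L/min, i.e. "
      f"{brain_flow_10x / CARDIAC_OUTPUT_L_MIN:.1f}x resting whole-body output")
```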

This is even without considering waste products, heat dissipation, and so on that will all have their own challenges.

comment by Rob Lucas · 2021-07-26T07:46:42.841Z

Thanks for a good comment.  My oversimplified thought process was that a 10x increase in energy usage for the brain would amount to a ~2x increase in total energy usage.  Since we're able to maintain that kind of energy use during exercise, and elite athletes can maintain it for many hours a day, it seems reasonable that the heart and other organs could sustain this kind of output.
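(Spelling out that arithmetic: if the brain accounts for a fraction f of total energy expenditure, a 10x brain multiplies the total by (1 - f) + 10f = 1 + 9f; for f in the commonly cited range of 0.1 to 0.2, that comes to roughly 1.9x to 2.8x.)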

However, the issues you bring up, of actually getting that much blood to the brain, evacuating waste products, doing the necessary metabolism there, and dealing with so much heat localized in the small area of the brain, are all valid.  While it seems the rest of the body wouldn't be constrained by this level of energy use, a 10x power output in the brain itself could well be a problem.

It would be worth a more detailed analysis of exactly where the maximum power output constraints on the brain, without any major changes, lie.

comment by Donald Hobson (donald-hobson) · 2021-08-07T15:11:03.569Z

Genetic modification takes time. If you are genetically modifying embryos, that's ~20 years before they are usefully contributing to your attempt to make better embryos.

Maybe you can be faster when enhancing already grown brains. Maybe not. Either way, doing so introduces even more complications.

At some point in this process, a human with at most moderate intelligence enhancement decides it would be easier to make an AI from scratch than to push biology any further. And then the AI can improve itself at computer speeds. 

In short, I don't expect the biological part of the process to be that explosive. It might be enough to trigger an AI intelligence explosion.