Are Human Brains Universal?
post by DragonGod · 2022-09-15T15:15:21.302Z · LW · GW · 4 comments
This is a question post.
Contents
- Introduction
- Core Claim
- Cognitive Advantages of Artificial Intelligences
- Cognitive Superiority of Artificial Intelligence
- Equivalent Power?
- Universality?
- Caveat
Answers
- 10 Tomás B.
- 3 ChristianKl
- 2 TAG
- 1 the gears to ascenscion
- 1 tailcalled
4 comments
[Previously] [LW · GW]
Introduction
After reading and updating on the answers to my previous question [LW · GW], I am still left unconvinced that the human brain is qualitatively closer to a chimpanzee's (let alone an ant's or earthworm's) than it is to hypothetical superintelligences.
I suspect a reason behind my obstinacy is an intuition that human brains are "universal" in a sense that chimpanzee brains are not. So, you can't really have other engines of cognition that are more "powerful" than human brains (in the way a Turing Machine is more powerful than a Finite State Automaton), only engines of cognition that are more effective/efficient.
By "powerful" here, I'm referring to the class of "real world" problems that a given cognitive architecture can learn within a finite time.
Core Claim
Human civilisation can do useful things that chimpanzee civilisation is fundamentally incapable of:
- Heavier than air flight
- Launching rockets
- High-fidelity long-distance communication
- Etc.
There do not seem to be similarly useful things that superintelligences are capable of but that humans are fundamentally incapable of: useful things that we could never accomplish in the expected lifetime of the universe.
Superintelligences seem like they would just be able to do the things we are already — in principle — capable of, but more effectively and/or more efficiently.
Cognitive Advantages of Artificial Intelligences
I expect a superintelligence to be superior to humans quantitatively via:
- Larger working memories
- Faster clock cycles (5 GHz vs 0.1 - 2 Hz)
- Faster thought? [1]
- Larger attention spans
- Better recall
- Larger long term memories
(All of the above could potentially differ by several orders of magnitude from the Homo sapiens brain, given sufficient compute.)
And qualitatively via:
- Parallel/multithreaded cognition
  - The ability to simultaneously execute:
    - Multiple different cognitive algorithms
    - Multiple instances of the same cognitive algorithm
  - Here too, the AI may have a several-orders-of-magnitude difference in the number of thoughts/cognitive threads it can simultaneously maintain vs a human's "one"
  - This may also be a quantitative difference, but it's the closest to a qualitatively different kind of cognition exclusive to AIs that has been proposed so far
Cognitive Superiority of Artificial Intelligence
I think the aforementioned differences are potent, and would confer on the AI a considerable advantage over humans.
For example:
- It could enable massively parallel learning, allowing the AI to attain immense breadth and depth of domain knowledge
  - The AI could become a domain expert in virtually every domain of relevance (or at least every domain of relevance to humans)
    - Given sufficient compute, the AI could learn millions of domains simultaneously
  - This would give it a cross-disciplinary perspective/viewpoint that no human can attain
- It could perform multiple cognitive processes at the same time while tackling a given problem (see the toy sketch after this list)
  - This may be equivalent to having N minds collaborating on a problem, but without the usual costs of collaboration: massively higher communication bandwidth and high-fidelity sharing of rich and complex cognitive representations (unlike the lossy transmissions of language)
  - It could simultaneously tackle every node of a well-factorised problem
  - The inherent limitations of population intelligences may not apply to a single mind running N threads
- Multithreaded thought may allow it to represent, manipulate and navigate abstractions that single-threaded brains cannot (within reasonable compute)
  - A considerable difference in which abstractions are available to it could constitute a qualitative difference
- Larger working memory could allow it to learn abstractions too large to fit in human brains
- The above may allow it to derive/synthesise insights that human brains will never find in any reasonable time frame
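To make the "single mind running N threads on a well-factorised problem" picture above a little more concrete, here is a toy sketch (mine, not the post's; the "problem" and all names are purely illustrative) that uses ordinary OS-level parallelism as a stand-in for parallel cognitive threads: factor a problem into independent sub-problems, work on all of them simultaneously, and combine the results.

```python
# Toy stand-in for "a single mind running N threads on a well-factorised problem":
# split the problem into independent pieces, solve them in parallel, combine results.
from concurrent.futures import ProcessPoolExecutor


def solve_subproblem(chunk: range) -> int:
    """One 'cognitive thread': here, just summing squares over a chunk."""
    return sum(n * n for n in chunk)


def factorise(n_total: int, n_threads: int) -> list[range]:
    """Split the problem into independent pieces, one per thread."""
    step = n_total // n_threads
    return [range(i * step, (i + 1) * step) for i in range(n_threads)]


if __name__ == "__main__":
    chunks = factorise(n_total=1_000_000, n_threads=8)
    with ProcessPoolExecutor(max_workers=8) as pool:
        partial_results = list(pool.map(solve_subproblem, chunks))
    print(sum(partial_results))  # same answer as a single-threaded run, just faster
```

None of this captures the interesting part of the claim (high-bandwidth sharing of rich representations without a lossy language channel); it only illustrates the factorise-then-parallelise shape of the argument.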
Equivalent Power?
My intuition is that there will be problems that would take human mathematicians/scientists/philosophers centuries to solve, which such an AI could probably get done in reasonable time frames. That's powerful.
But it still doesn't feel as large as the chimp-to-human gap. It feels like the AIs can just do things much quicker/more efficiently than humans: solve problems faster than we can.
It doesn't feel like the AI can solve problems that humans will never solve period[2], in the way that humans can solve many problems that chimpanzees will never solve period[3] (most of mathematics, physics, computer science, etc.).
It feels to me that the human brain — though I'm using human civilisation here as opposed to any individual human — is still roughly as "powerful" as this vastly superior engine of cognition. We can solve the exact same problems as superintelligences; they can just do it more effectively/efficiently.
I think the last line above is the main sticking point. Human brains are capable of solving problems that chimpanzee society will never solve (unless they evolve into a smarter species). I am not actually convinced that this much smarter AI can solve problems that humans will never solve.
Universality?
One reason the human brain would be equivalently powerful to a superintelligence would be that the human brain is "universal" in some sense (note that it would have to be a sense in which chimpanzee brains are not universal). If the human brain were capable of solving all "real world" problems, then of course there wouldn't be any other engines of cognition that were strictly more powerful.
I am not able to provide a rigorous definition of the sense of "universality" I mean here — but to roughly gesture in the direction of the concept I have in mind — it's something like "can eventually learn any natural "real world"[4] problem set/domain that another agent can learn".
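One hedged way to write down that gesture (my notation, not the post's): call a cognitive architecture $A$ universal over the class $\mathcal{D}$ of natural "real world" domains if

$$\forall D \in \mathcal{D}:\quad \big(\exists\ \text{agent } B \text{ that can learn } D \text{ in finite time}\big)\ \Longrightarrow\ A \text{ can learn } D \text{ in finite time.}$$

On this reading, the post's intuition is that human civilisation satisfies the property while chimpanzee brains do not, and that anything satisfying it can only be beaten on speed and efficiency, not on reach.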
Caveat
I think there's an argument that if there are (real world) problems that human civilisation can never solve[5] no matter what, we wouldn't be able to conceive/imagine them. I find that line of reasoning kind of silly, and I'm distrustful/sceptical of it.
We have universal languages (our natural languages also seem universal), so a description of such problems should be presentable in such languages. Perhaps the problem description would be too large to fit in working memory, but even then it could still be stored electronically.
But more generally, I do not think that "I can coherently describe a problem" implies "I can solve the problem". There are many problems that I can describe but not solve[6], and I don't expect this to be broadly different for humanity as a whole. If there are problems we cannot solve, I would still expect that we are able to describe them. I welcome suggestions for problems that you think human civilisation can never solve, but that is not my primary inquiry here.
- ^
To be clear, I do not actually expect that the raw speed difference between CPU clock cycles and neuronal firing rates will straightforwardly translate into a speed-of-thought difference between human and artificial cognition (I expect a great many operations may be involved in a single thought, and I suspect intelligence won't just be that easy), but the sheer nine-order-of-magnitude difference does deserve consideration.
Furthermore, it needs to be stressed that the 0.1-2 Hz figure is a baseline/average rate. Our maximum rate during periods of intense cognitive effort could well be significantly higher (this may be thought of as "overclocking").
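For concreteness, the arithmetic behind the order-of-magnitude comparison (my own back-of-the-envelope using the figures quoted above):

$$\log_{10}\!\frac{5\times 10^{9}\ \text{Hz}}{2\ \text{Hz}} \approx 9.4, \qquad \log_{10}\!\frac{5\times 10^{9}\ \text{Hz}}{0.1\ \text{Hz}} \approx 10.7,$$

i.e. the raw ratio of a 5 GHz clock to the quoted average firing rates is roughly nine to eleven orders of magnitude.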
- ^
To be clear, when I say "humans will never solve", I am imagining human civilisation, not an individual human scientist. There are some problems that remained unsolved by civilisation for centuries. And while we may accelerate our solution of hard problems by developing thinking machines, I think we are only accelerating said solutions. I do not think there are problems that civilisation will just never solve if we never develop human-level general AI.
- ^
Assuming that the intelligence of chimpanzees is roughly held constant or only drifts within a narrow range across generations. Chimpanzees evolving to considerably higher levels of intelligence would not still be "chimpanzees" for the purpose of my questions.
- ^
Though it may be better to replace "real world" with "useful". There may be some practical tasks that some organisms engage in, which the human brain cannot effectively "learn". But those tasks aren't useful for us to learn, so I don't feel they would be necessary for the notion of universality I'm trying to gesture at.
- ^
In case it was not clear, for the purposes of this question, the problems that "human civilisation can solve" refer to those problems that human civilisation can solve within the lifetime of the universe without developing human-level general AI.
- ^
The list of unsolved problems in computer science, the list of unsolved problems in physics, ... provide many other examples.
Answers
Human brains are demonstrably not equivalent in power to each other, let alone AGIs. Try teaching an 85 IQ person quantum physics and tell me our brains are universal learning machines.
↑ comment by jacob_cannell · 2022-09-16T09:21:19.034Z · LW(p) · GW(p)
Some brains being broken in various ways is not evidence that other brains are not universal learning machines. My broken laptop is not evidence that all computers are not Turing machines.
Replies from: niknoble
↑ comment by niknoble · 2022-09-17T18:09:17.702Z · LW(p) · GW(p)
Agreed. Also, it's not surprising that the universality threshold exists somewhere within the human range because we already know that humans are right by the cutoff. If the threshold were very far below the human range, then a less evolved species would have hit it before we came about, and they would have been the ones to kick off the knowledge explosion.
Replies from: jacob_cannell
↑ comment by jacob_cannell · 2022-09-17T20:50:40.354Z · LW(p) · GW(p)
Regardless of where the threshold is, evolution routinely randomly breaks things anyway - it's the only way it can find genes that have become redundant/useless. Sort of like ablation studies in machine learning papers.
↑ comment by DragonGod · 2022-09-15T15:28:44.960Z · LW(p) · GW(p)
I think this elides my query.
Even if it's true, it has no bearing on the question. The human range may be very wide, but it does not follow that it is so narrow that more powerful systems exist. It does not touch on "the class of problems that human civilisation can solve without developing human-level general AI", which is my proxy for the class of problems the human brain can solve.
And the fact that some human can accomplish a task means the brain is capable of that task; the fact that some humans cannot accomplish the same task has no bearing on that.
Replies from: Bjartur Tómas
↑ comment by Tomás B. (Bjartur Tómas) · 2022-09-15T17:23:38.008Z · LW(p) · GW(p)
It seems supremely unlikely that this universality threshold happens to be just at Terry Tao's level of intellect. Wherever this threshold is, it must be below human hyper-genius level, as it is clear these geniuses are capable of understanding things the vast, vast, vast majority of people are not. Why posit this universality threshold thing exists when no one has seen it? David Deutsch has written huge swaths of text arguing for such a thing - I don't find it at all persuasive myself.
Replies from: niknoble, TAG, DragonGod
↑ comment by niknoble · 2022-09-17T18:27:29.631Z · LW(p) · GW(p)
> it is clear these geniuses are capable of understanding things the vast, vast, vast majority of people are not
As the original post suggests, I don't think this is true. I think that pretty much everyone in this comments section could learn any concept understood by Terry Tao. It would just take us longer.
Imagine your sole purpose in life was to understand one of Terry Tao's theorems. All your needs are provided for, and you have immediate access to experts whenever you have questions. Do you really think you would be incapable of it?
↑ comment by DragonGod · 2022-09-15T17:41:59.879Z · LW(p) · GW(p)
I am not thinking at the level of particular humans, but of the basic architecture of the human brain.
It appears that you can raise human geniuses from average parents if you train them early enough. And while human intelligence is hereditary, there isn't that much genetic drift within the human population.
And I'm unconvinced that in most cases it's "functional person cannot learn concept X" as opposed to: "it'll take functional person a lot more time/effort/energy/attention to learn concept X".
It may not be economical for most people to learn linear algebra (but I suspect most babies can in principle be raised so that they know linear algebra as adults).
Replies from: Bjartur Tómas
↑ comment by Tomás B. (Bjartur Tómas) · 2022-09-15T17:53:23.869Z · LW(p) · GW(p)
>It appears that you can raise human geniuses from average parents if you train them early enough. And while human intelligence is hereditary, there isn't that much genetic drift within the human population.
Don't make me call Gwern!
Replies from: DragonGod
↑ comment by DragonGod · 2022-09-15T18:09:03.084Z · LW(p) · GW(p)
Do please link the relevant post. I'd like to change my mind on this.
Replies from: Bjartur Tómas, Bjartur Tómas
↑ comment by Tomás B. (Bjartur Tómas) · 2022-09-15T19:24:45.582Z · LW(p) · GW(p)
Ignore the other links I gave, I've just recalled a Steve Hsu post that is more to the point at hand: https://infoproc.blogspot.com/2010/05/psychometric-thresholds-for-physics-and.html
↑ comment by Tomás B. (Bjartur Tómas) · 2022-09-15T18:32:00.840Z · LW(p) · GW(p)
The Blank Slate is a good polemic on the topic. The Nurture Assumption is also good.
Gwern links:
There are problems that are too complex for humans to solve without the help of a computer. To the extent that it's possible for humans to develop AGI, you can say that if you allow help from computers, any problem that an AGI can solve is by definition also a problem that humans can solve.
If you ask a question such as "Why is move X a better Go move than Y?", then today in some cases the only good answer is "Because AlphaGo says so". It might be possible for an AGI to definitively say that move X is better than Y and give the perfect Go move for any situation, but the reasoning might be too complex for humans to understand without falling back on "because the neural net says so".
You might argue that a neural net that can definitively say what the best Go move is would not be an AGI. But if you look at a real-world problem like making the perfect stock investment given the available data, which requires integrating a lot of different data sources, the complexity of that problem might require a neural net of AGI-level complexity.
If we are fundamentally non-universal, there are problems we cannot even describe. Fermat's Last Theorem cannot even be stated in Pirahã.
the question isn't what class of problems can be understood, it's how efficiently you can jump to correct conclusions, check them, and build on them. any human can understand almost any topic, given enough interest and enough willingness to admit error that they actually try enough times to fail and see how to correct themselves. but for some, it might take an unreasonably long time to learn some fields, and they're likely to get bored before perseverance compensates for efficiency at jumping to correct conclusions.
in the same way, a sufficiently strong ai is likely to be able to find cleaner representations of the same part of the universe's manifold of implications, and potentially render the implications in parts of possibility space much further away than a human brain could given the same context, actions, and outcomes.
in terms of why we expect it to be stronger: because we expect someone to be able to find algorithms that can model the same parts of the universe as advanced physics folks study, with the same or better accuracy in-distribution and/or out-of-distribution, given the same order of magnitude of energy burned as it takes to run a human brain. once the model is found it may be explainable to humans, in fact! the energy constraint seems to push it to be, though not perfectly. and likely the stuff too complex for humans to figure out at all is pretty rare - it would have to be pseudo-laws about a fairly large system, and would probably require seeing a huge amount of training data to figure it out.
semi-chaotic fluid systems will be the last thing intelligence finds exact equations for.
It's a bit of a strange question - why care if humans will solve everything that an AI will solve?
But ok.
Suppose you put an AI to solving a really big instance of a problem that it's really good at, so big of an instance that it takes an appreciable fraction of the lifespan of the universe to solve it.
In that case you already seem to be granting that it may take humans much longer to solve it, which I would assume could imply that humans run out of time, or don't have enough resources in the universe, to solve it.
↑ comment by DragonGod · 2022-09-15T16:26:01.725Z · LW(p) · GW(p)
> It's a bit of a strange question - why care if humans will solve everything that an AI will solve?
Because I started out convinced that human cognition is qualitatively closer to superintelligent cognition than it is to many expressions of animal cognition (I find the "human - ant dynamic" a very poor expression for the difference between human cognition and superintelligent cognition).
> But ok.
> Suppose you put an AI to solving a really big instance of a problem that it's really good at, so big of an instance that it takes an appreciable fraction of the lifespan of the universe to solve it.
> In that case you already seem to be granting that it may take humans much longer to solve it, which I would assume could imply that humans run out of time, or don't have enough resources in the universe, to solve it.
This may make the AI system more powerful than humans as I defined "powerful", but it doesn't meet my intuitive notions of more "powerful". It feels like the AI system is still just more effective/efficient.
When I mentioned "the expected lifetime of the universe", I was trying to gesture at monkeys typing randomly at a typewriter eventually producing the works of Shakespeare.
There are problems for which humans have no better approach than random brute-force search. But I think any problems that human civilisation (starting from 2022) would spend millennia trying to solve only via random brute force are probably problems that superintelligences have no option but to try random brute force on.
Superintelligences cannot learn maximum entropy distributions either.
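To make the typewriter intuition concrete (my own back-of-the-envelope, using a toy uniform-random-typing model that is not in the original comment): for a target string of length $n$ over an alphabet of size $k$, each attempt succeeds with probability $k^{-n}$, so the expected number of attempts is $k^{n}$. Even a short target is hopeless:

$$k = 27,\ n = 40:\qquad k^{n} = 27^{40} \approx 10^{57}\ \text{attempts}.$$

A system $10^{9}$ times faster than a human still faces on the order of $10^{48}$ human-equivalent attempts, so on genuinely structureless search problems a raw speed advantage barely registers.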
And even if I did decide to concede this point (though it doesn't map neatly to what I wanted), the class of problems that human civilisation can solve still seems closer to the class of problems that a superintelligence can solve than to the class of problems that a chimpanzee can solve.
But honestly, this does little to pump my intuition that human brains are not universal.
Replies from: tailcalled
↑ comment by tailcalled · 2022-09-15T16:32:06.979Z · LW(p) · GW(p)
> Because I started out convinced that human cognition is qualitatively closer to superintelligent cognition than it is to many expressions of animal cognition (I find the "human - ant dynamic" a very poor expression for the difference between human cognition and superintelligent cognition).
Qualitatively closer for what purpose?
Replies from: DragonGod
↑ comment by DragonGod · 2022-09-15T17:14:35.544Z · LW(p) · GW(p)
The main distinction I'm drawing is something like: humans can do useful things, like build rockets, that chimpanzees can never do.
Superintelligences can do useful things like ... "more effectively/efficiently than humans can". There doesn't seem to be a gap of not being able to do the thing at all.
Replies from: tailcalled
↑ comment by tailcalled · 2022-09-15T17:35:54.948Z · LW(p) · GW(p)
Yes, but the appropriate way to draw the line likely depends on what the purpose of drawing the line is, so that is why I am asking about the purpose.
Replies from: DragonGod
↑ comment by DragonGod · 2022-09-15T18:06:06.727Z · LW(p) · GW(p)
I've heard people analogise the gap between humans and superintelligences to the gap between humans and ants, and that felt wrong to me, so I decided to investigate it.
Replies from: tailcalled, tailcalled
↑ comment by tailcalled · 2022-09-16T08:18:46.912Z · LW(p) · GW(p)
To clarify, I would not consider that analogy cruxy at all. I don't tend to think of humans vs ants when reasoning about humans vs superintelligences; instead I tend to think about humans vs superintelligences.
↑ comment by tailcalled · 2022-09-15T18:46:51.547Z · LW(p) · GW(p)
We could imagine a planet-scale AI observing what's going on all over the world and coordinating giant undertakings as part of that. Its strategy could exploit subtle details in different locations that just happen to line up, unlike humans who have to delegate to others when the physical scale gets too big and who therefore have extremely severe bottleneck problems. Since it would be literally, physically as big relative to us as we are relative to ants, the comparison doesn't seem unreasonable to make.
But idc, I don't really tend to make animal comparisons when it comes to AGI.
4 comments
Comments sorted by top scores.
comment by Yitz (yitz) · 2022-09-16T21:45:05.760Z · LW(p) · GW(p)
Strong upvote for the great question—I don't have a definite answer for you, and would potentially be willing to concede the "universality" of human brains by your definition. I'm not sure how much that changes anything, though. For all practical purposes, I think we're in agreement that, say, most complex computational problems can't be solved by humans within a reasonable timeframe, but could be solved by a sufficiently large superintelligence fairly quickly.
comment by Vivek Hebbar (Vivek) · 2022-09-19T12:55:14.888Z · LW(p) · GW(p)
> Faster clock cycles (5 GHz vs 0.1 - 2 GHz)
This is a typo; the source says "average firing rates of around 0.1Hz-2Hz", not GHz. This seems too low as a "clock speed", since obviously we can think way faster than 2 operations per second; my cached belief was 'order of 100 Hz'.
Replies from: DragonGod
↑ comment by DragonGod · 2022-09-22T21:33:49.484Z · LW(p) · GW(p)
Thanks for pointing out the typo.
The cached belief is something I've repeatedly heard from Yudkowsky and Bostrom (or maybe I just reread/relistened to the same pieces from them), but as far as I'm aware, it has no proper citations.
I recall some mild annoyance at it not being substantiated. And I trust AI Impacts' judgment here more than Yudkowsky's/Bostrom's.
> This seems too low as a "clock speed", since obviously we can think way faster than 2 operations per second; my cached belief was 'order of 100 Hz'.
I think that's average firing rates. An average rate of < 1 thought per second doesn't actually seem implausible? Our burst cognitive efforts exceed that baseline ("overclocking"), but it tires us out pretty quickly.