Human-level is not the limit
post by Vishakha (vishakha-agrawal), Algon · 2025-04-16T08:33:15.498Z
This is a link post for https://aisafety.info/questions/NM3A/3:-Human-level-is-not-the-limit
This is an article in the new intro to AI safety series from AISafety.info, which writes introductory AI safety content. We'd appreciate any feedback.
The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.
We’ve built technologies like skyscrapers that are much larger than us, bulldozers that are much stronger, airplanes that are much faster, and bridges that are much sturdier. Similarly, there are good reasons to think that machines can eventually become much more capable at general cognitive problem-solving.
On an abstract level, our intelligence came from evolution, and while evolution produces well-optimized systems, it can’t “think ahead” or deliberately plan design choices. Humans are just the first generally intelligent design that worked, and there’s no reason to expect us to be close to the most effective one. Moreover, evolution works under heavy constraints that don’t apply to the creators of AI.
Advantages that AI “brains” can eventually gain over ours include:
- Sheer size. The size of human heads is limited by the constraint of having to fit through the birth canal, but there’s nothing stopping an AI from being, say, the size of a warehouse. And we don’t know exactly how many computations a human brain does, but most estimates imply it’s far fewer than a current datacenter performs, let alone one built with future technology.
- Greater serial speed. Signals in a digital computer can propagate at millions of times the speed they do in a human brain. That means a computer could do the same computation in a millionth of the time, as long as it applied a million times as many operations per second (see the back-of-envelope sketch after this list). To a mind like that, we’d be almost frozen in time.
- The ability to replicate themselves as easily as copying some files on a computer. As a result of having shared structure and common knowledge of that structure, AIs could coordinate and share information and skills much more easily than human individuals. (There’s a blurry line between systems being very well-coordinated and being one large system; see the first point.) And there could be many of them as soon as they were created. In a science fiction story, a genius inventor might design one robot, and then have one robot. But AI in the real world is not like that: the massive computing power used to create a single model can then run huge numbers of copies of that model.
- The ability to self-modify more easily. We mostly can’t reach into our brains, but AI systems are software on a computer that can be edited. AI self-modification faces issues with interpretability, at least in current systems. But not all self-modification requires interpretability, and an AI could learn to understand its own workings, or build future, more easily interpretable systems. That would enable it to improve itself by editing its own code, in a way humans can’t do with their own brains.
- The ability to explore a much larger space of algorithms, and end up qualitatively smarter.
- The ability to be fully dedicated to its goals, working 24/7 without needing to rest or losing motivation. People sometimes argue that AI can’t become very powerful because the limiting factor in human success is something other than intelligence, such as determination or courage. But even if they’re right, the same processes that will optimize the intelligence of future systems can also optimize their determination and courage, and any other cognitive trait that matters.
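To make the sheer-size and serial-speed comparisons above concrete, here is a rough back-of-envelope sketch in Python. Every constant is an illustrative assumption drawn from common order-of-magnitude estimates, not a measurement:

```python
# Back-of-envelope numbers for the "sheer size" and "serial speed" points.
# All constants are illustrative assumptions, not measurements.

# Signal propagation: fast myelinated axons conduct at roughly 100 m/s;
# electrical signals in silicon travel at a large fraction of light speed.
neuron_speed_m_per_s = 1e2       # assumed: fast biological axon
silicon_speed_m_per_s = 2e8      # assumed: ~2/3 the speed of light
serial_speedup = silicon_speed_m_per_s / neuron_speed_m_per_s
print(f"serial signal-speed advantage: ~{serial_speedup:.0e}x")  # ~2e+06x

# Compute: one common estimate puts the brain around 1e15 operations per
# second; a large AI datacenter can exceed 1e18 FLOP/s.
brain_ops_per_s = 1e15           # assumed estimate
datacenter_flop_per_s = 1e18     # assumed estimate
print(f"compute advantage: ~{datacenter_flop_per_s / brain_ops_per_s:.0f}x")

# A machine that signals ~1e6x faster and applies ~1e6x as many operations
# per second could run the same serial computation in ~1/1e6 of the time.
seconds_per_year = 365 * 24 * 3600
print(f"a subjective year of human-speed thought in "
      f"~{seconds_per_year / serial_speedup:.0f} s of wall-clock time")
```

Under these assumed numbers, a subjective year of human-speed thought would pass in about sixteen seconds of wall-clock time.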
Adding all this up, it eventually becomes wrong to think of an advanced AI system as if it were a single human genius. It becomes more like a hive of thousands or millions of supergeniuses in various fields, moving with perfect coordination and sharing information instantly. A system with such advantages wouldn’t be infinitely intelligent, or capable of solving any problem, but it would hugely outperform us in many important domains, including science, engineering, economic and military strategy, and persuasion.
This is often called superintelligence. Although it might sound like a far-future concern, we could see it a short time after AI reaches human level.
2 comments
comment by tailcalled · 2025-04-16T10:34:05.792Z
> People sometimes argue that AI can’t become very powerful because the limiting factor in human success is something else than intelligence — perhaps determination or courage.
We live in something shockingly close to a post-scarcity society, at least if you come from a wealthy country like Denmark or the US. There are some limiting factors to success - e.g. if you get run over by a bus. But by and large You Can Just Do Things.
> But even if they’re right, the same processes that will increase the intelligence of future systems can also increase their determination and courage, and any other cognitive trait that matters.
Mathematically impossible. If X matters then so does -X, but any increase in X corresponds to a decrease in -X.
In particular, the easiest way to become richer is to own something that very predictably yields profit: some people want to store value, which makes them want to buy assets that produce profit as robustly as possible, so they bid the price of those assets up to the point where the profit is infinitesimal relative to the price. But courage interferes with running a predictably profitable venture (because sometimes you charge into a loss), so this leads to selection for reductions in courage, rather than increases in courage.
reply by Algon · 2025-04-16T12:42:28.480Z
EDIT 2: Did you mean that there are advantages to having both courage and caution, so you can't have a machine that has maximal courage and maximal caution? That's true, but you can probably still make Pareto improvements over humans in terms of courage and caution (see the toy sketch after this comment).
Would changing "increase" to "optimize" fix your objection? Also, I don't see how your first paragraph contradicts the first quoted sentence.
> Mathematically impossible. If X matters then so does -X, but any increase in X corresponds to a decrease in -X.
I don't know how the second sentence leads to the first. Why should a decrease in -X lead to less success? Moreover, claims of mathematical impossibility are often overstated.
As for the paragraph after, it seems to assume that current traits are on some sort of Pareto frontier of economic fitness (and, perhaps, that we're in an adequate equilibrium). But I don't see why that'd be true. Like, I know of people who are more diligent than me, more intelligent, have lower discount rates, etc. And they are indeed successful. EDIT: AFAICT, there's a tonne of frictions and barriers, which weaken the force of the economic argument I think you're making here.
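To make the Pareto-improvement point in the EDIT 2 above concrete: maximal courage and maximal caution can't coexist, but a design can still beat a given human on both axes at once. A minimal Python sketch, with arbitrary illustrative scores:

```python
from typing import NamedTuple

class Profile(NamedTuple):
    courage: float  # willingness to act despite risk
    caution: float  # tendency to avoid predictable losses

def pareto_dominates(a: Profile, b: Profile) -> bool:
    """True if `a` is at least as good as `b` on every axis and strictly
    better on at least one, i.e. a Pareto improvement over `b`."""
    at_least_as_good = a.courage >= b.courage and a.caution >= b.caution
    strictly_better = a.courage > b.courage or a.caution > b.caution
    return at_least_as_good and strictly_better

# Arbitrary illustrative scores: the machine is maximal on neither axis,
# yet better than the human on both.
human = Profile(courage=0.6, caution=0.5)
machine = Profile(courage=0.8, caution=0.7)

print(pareto_dominates(machine, human))  # True
```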