Human-level is not the limit

post by Vishakha (vishakha-agrawal), Algon · 2025-04-16T08:33:15.498Z · LW · GW · 2 comments

This is a link post for https://aisafety.info/questions/NM3A/3:-Human-level-is-not-the-limit


This is an article in the new intro to AI safety series from AISafety.info. We'd appreciate any feedback.

The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.

We’ve built technologies like skyscrapers that are much larger than us, bulldozers that are much stronger, airplanes that are much faster, and bridges that are much sturdier. Similarly, there are good reasons to think that machines can eventually become much more capable at general cognitive problem-solving.

On an abstract level, our intelligence came from evolution, and while evolution results in well-optimized systems, it can’t “think ahead” or deliberately plan design choices. Humans are just the first generally intelligent system that worked, and there’s no reason to expect us to be close to the most effective design. Moreover, evolution works under heavy constraints that don’t affect the creators of AI.

Advantages that AI “brains” can eventually gain over ours include:

Adding this all up, eventually, it becomes wrong to think of an advanced AI system as if it’s a single human genius — it becomes more like a hive of thousands or millions of supergeniuses in various fields, moving with perfect coordination and sharing information instantly. A system with such advantages wouldn’t be infinitely intelligent, or capable of solving any problem. But it would hugely outperform us in many important domains, including science, engineering, economic and military strategy, and persuasion.

This is often called superintelligence — and although it might sound like a far-future concern, we could see it a short time after AI reaches human level.

2 comments

Comments sorted by top scores.

comment by tailcalled · 2025-04-16T10:34:05.792Z · LW(p) · GW(p)

> People sometimes argue that AI can’t become very powerful because the limiting factor in human success is something else than intelligence — perhaps determination or courage.

We live in something shockingly close to a post-scarcity society, at least if you come from a wealthy country like Denmark or the US. There are some limiting factors to success — e.g. if you get run over by a bus. But by and large You Can Just Do Things.

> But even if they’re right, the same processes that will increase the intelligence of future systems can also increase their determination and courage, and any other cognitive trait that matters.

Mathematically impossible. If X matters then so does -X, but any increase in X corresponds to a decrease in -X.

In particular the easiest way to become richer is to own something that very predictably gives profit, since some people want to store value, and that makes them want to buy things that are as robustly producing profit as possible, so they bid the price of those things all the way up to where the profit is infinitesimal relative to it. But courage interferes with running a predictably profitable venture (because sometimes you charge into a loss), so this leads to a selection for reductions in courage, rather than increases in courage.

Replies from: Algon
comment by Algon · 2025-04-16T12:42:28.480Z · LW(p) · GW(p)

EDIT 2: Did you mean that there are advantages to having both courage and caution, so you can't have a machine that has maximal courage and maximal caution? That's true, but you can probably still make Pareto improvements over humans in terms of courage and caution.

Would changing "increase" to "optimize" fix your objection? Also, I don't see how your first paragraph contradicts the first quoted sentence. 

> Mathematically impossible. If X matters then so does -X, but any increase in X corresponds to a decrease in -X.

I don't know how the second sentence leads to the first. Why should a decrease in -X lead to less success? Moreover, claims of mathematical impossibility are often overstated.

As for the paragraph after, it seems like it assumes current traits are on some sort of Pareto frontier of economic fitness. (And, perhaps, an assumption of adequate equilibria.) But I don't see why that'd be true. Like, I know of people who are more diligent than me, more intelligent, have lower discount rates etc. And they are indeed successful. EDIT: AFAICT, there's a tonne of frictions and barriers, which weaken the force of the economic argument I think you're making here.