The Intelligence Curse: an essay series
post by L Rudolf L (LRudL), lukedrago · 2025-04-24T12:59:15.247Z
We've published an essay series on what we call the intelligence curse. Most content is brand new, and all previous writing has been heavily reworked.
Visit intelligence-curse.ai for the full series.
Below is the introduction and table of contents.
We will soon live in the intelligence age. What you do with that information will determine your place in history.
The imminent arrival of AGI has pushed many to try to seize the levers of power as quickly as possible, leaping towards projects that, if successful, would comprehensively automate all work. There is a trillion-dollar arms race to see who can achieve such a capability first, with trillions more in gains to be won.
Yes, that means you’ll lose your job. But it goes beyond that: this will remove the need for regular people in our economy. Powerful actors—like states and companies—will no longer have an incentive to care about regular people. We call this the intelligence curse.
If we do nothing, the intelligence curse will work like this:
- Powerful AI will push automation through existing organizations, starting from the bottom and moving to the top.
- AI will obsolete even outlier human talent. Social mobility will stop, ending the social dynamism and progress that it drives.
- Non-human factors of production, like capital, resources, and control over AI, will become overwhelmingly more important than humans.
- This will usher in incentives for powerful actors around the world that break the modern social contract.
- This could result in the gradual—or sudden—disempowerment of the vast majority of humanity.
But this prophecy is not yet fulfilled; we reject the view that this path is inevitable. We see a different future on the horizon, but it will require a deliberate and concerted effort to achieve it.
We aim to change the incentives driving the intelligence curse, maintaining human economic relevance and strengthening our democratic institutions to withstand what will likely be the greatest societal disruption in history.
To break the intelligence curse, we should chart a different path on the tech tree, building technology that lets us:
- Avert AI catastrophes by hardening the world against them, both because it is good in itself and because it removes the security threats that drive calls for centralization.
- Diffuse AI, to get it into the hands of regular people. In the short term, build AI that augments human capabilities. In the long term, align AI directly to individual users and give everyone control in the AI economy.
- Democratize institutions, making them more anchored to the needs of humans even as they are buffeted by the changing incentive landscape and fast-moving events of the AGI transition.
In this series of essays, we examine the incoming crisis of human irrelevance and provide a map towards a future where people remain the masters of their destiny.
Chapters
1. Introduction
We will soon live in the intelligence age. What you do with that information will determine your place in history.
2. Pyramid Replacement
Increasingly powerful AI will trigger pyramid replacement: a systematic hollowing out of corporate structures that starts with entry-level hiring freezes and moves upward through waves of layoffs.
3. Capital, AGI, and Human Ambition
AI will make non-human factors of production more important than human ones. The result may be a future where today's power structures become permanent and frozen, with no remaining pathways for social mobility or progress.
4. Defining the Intelligence Curse
With AGI, powerful actors will lose their incentive to invest in regular people, just as resource-rich states today neglect their citizens because their wealth comes from natural resources rather than taxing human labor. This is the intelligence curse.
5. Shaping the Social Contract
The intelligence curse will break the core social contract. While this suggests a grim future, understanding how economic incentives reshape societies points to a solution: we can deliberately develop technologies that keep humans relevant.
6. Breaking the Intelligence Curse
Avert AI catastrophes with technology for safety and hardening that does not require centralized control. Diffuse AI that differentially augments humans rather than automating them, and that decentralizes power. Democratize institutions, bringing them closer to regular people as AI grows more powerful.
7. History is Yours to Write
You have a roadmap to break the intelligence curse. What will you do with it?
Comments
comment by Simon Lermen (dalasnoin) · 2025-04-24T17:37:19.576Z
I don’t believe the standard story of the resource curse. I also don’t think Norway and the Congo are useful examples, because they differ in too many other ways. According to o3, “Norway avoided the resource curse through strong institutions and transparent resource management, while the Congo faced challenges due to weak governance and corruption.” To me this is a case where existing AI models still fall short: the textbook story leaves out key factors and never comes close to proving that good institutions alone prevented the resource curse.
Regarding the main content, I find the scenario implausible. The “social-freeze and mass-unemployment” narrative seems to assume that AI progress will halt exactly at the point where AI can do every job but is still somehow not dangerous. You also appear to assume a new stable state in which a handful of actors control AGIs that are all roughly at the same level.
More directly, full automation of the economy would mean that AI can perform every task in companies already capable of creating military, chemical, or biological threats. If the entire economy is automated, AI must already be dangerously capable.
I expect reality to be much more dynamic, with many parties simultaneously pushing for ever-smarter AI while understanding very little about its internals. Human intelligence is nowhere near the maximum, and far more dangerous intelligence is possible. Many major labs now treat recursive self-improvement as the default path. I expect that approaching superintelligence this way, without any deeper understanding of its internal cognition, will give us systems that we cannot control and that will get rid of us. For these reasons, I have trouble worrying about job replacement. You also seem to avoid mentioning the extinction risk in this text.
reply by L Rudolf L (LRudL) · 2025-04-24T21:28:43.741Z
> I don’t believe the standard story of the resource curse.
What do you think is the correct story for the resource curse?
> I find the scenario implausible.
This is not a scenario; it is a class of concerns about the balance of power and economic misalignment that we expect to be a force in many specific scenarios. My actual scenario is here [LW · GW].
> The “social-freeze and mass-unemployment” narrative seems to assume that AI progress will halt exactly at the point where AI can do every job but is still somehow not dangerous.
We do not assume AI progress halts at that point. We say several times that we expect AIs to keep improving. They will take the jobs, and they will keep on improving beyond that. The jobs do not come back if the AI gets even smarter. We also have an entire section dedicated to mitigating the risks of AIs that are dangerous, because we believe that is a real and important threat.
> More directly, full automation of the economy would mean that AI can perform every task in companies already capable of creating military, chemical, or biological threats. If the entire economy is automated, AI must already be dangerously capable.
Exactly!
> I expect reality to be much more dynamic, with many parties simultaneously pushing for ever-smarter AI while understanding very little about its internals.
"Reality will be dynamic, with many parties simultaneously pushing for ever-smarter AI [and their own power & benefit] while understanding very little about [AI] internals [or long-term societal consequences]" is something I think we both agree with.
> I expect that approaching superintelligence this way, without any deeper understanding of its internal cognition, will give us systems that we cannot control and that will get rid of us. For these reasons, I have trouble worrying about job replacement.
If we hit misaligned superintelligence in 2027 and all die as a result, then job replacement, long-run trends of gradual disempowerment, and the increased risk of human coups indeed do not come to pass. However, if we don't hit misaligned superintelligence immediately, and instead some humans pull a coup with the AIs, or the advanced AIs obsolete humans very quickly (very plausible if you think AI progress will be fast!) and the world is now states battling each other with increasingly dangerous AIs while feeling little need to care about collateral damage to humans, then it sure will have been a low-dignity move from humanity if literally no one worked on those threat models!
> You also seem to avoid mentioning the extinction risk in this text.
The audience is primarily not LessWrong, and the arguments for working on alignment & hardening go through based on merely catastrophic risks (which we do mention many times). Also, the series is already enough of an everything-bagel as it is.