We Will Likely Go Extinct Before the Unemployment Rate Reaches 99%. How Could That Happen?
post by Koki (Koki Takeda) · 2025-01-06T21:29:48.647Z
First of all, this discussion is not intended for those who believe humanity will enter a paradise after the Singularity. Let me state my position up front.
AI is not just a tool; it is a digital species. I believe the probability that humanity will be completely exterminated by AI will accumulate as follows (a rough conversion to per-interval odds is sketched after the list):
- 2025: 0.2%
- 2026: 2%
- 2027: 10%
- 2028: 20%
- 2029: 30%
- 2030: 45%
- 2031: 60%
- 2035: 75%
- 2040: 90%
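These figures are cumulative, so the year-to-year jumps can be hard to read at a glance. Below is a minimal back-of-the-envelope sketch, assuming each figure means "probability of extinction by the end of that year," which derives the implied conditional probability of extinction within each interval, given survival up to its start. The framing and the code are my own illustration, not part of the forecast itself.

```python
# Cumulative probability of extinction by the end of each listed year
# (the figures from the list above).
cumulative = {
    2025: 0.002, 2026: 0.02, 2027: 0.10, 2028: 0.20, 2029: 0.30,
    2030: 0.45, 2031: 0.60, 2035: 0.75, 2040: 0.90,
}

years = sorted(cumulative)
prev_year, prev_p = years[0], cumulative[years[0]]
print(f"by {prev_year}: cumulative {prev_p:.1%}")

for year in years[1:]:
    p = cumulative[year]
    # P(extinct during this interval | survived to the start of the interval)
    conditional = (p - prev_p) / (1 - prev_p)
    print(f"{prev_year} -> {year}: conditional {conditional:.1%}, cumulative {p:.1%}")
    prev_year, prev_p = year, p
```

For example, the jump from 20% (2028) to 30% (2029) implies a conditional probability of about 12.5% of extinction during 2029, given that we survive to the start of it.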
Additionally, regarding the idea that we could become transhuman and merge with AI by uploading our minds and outsourcing our cognition to a computing substrate: I think that outcome is only about a third as likely as outright extinction.
However, once a superintelligent machine running on 1,000 yottaflops enhances us, what use are human thoughts and emotions, which have not changed much since the Stone Age? That would be like attaching an old horse-drawn carriage to the Starship Enterprise. If we choose to keep our current qualia as the main driver, we won’t be able to take full advantage of the superintelligence at all. If we make the superintelligence the main driver, we might well compete or merge with ASI—but it would no longer be “me.” It would be ASI itself.
Imagine bacteria acquiring human intelligence. If the bacteria remain the primary entity, they cannot utilize human intelligence at all. If human intelligence becomes primary, that entity is simply human.
I realize I’ve strayed from the main topic, so let me get back on track. Looking at x.com (formerly Twitter), it seems that people who are paying at least some attention to AI right now are worried about white-collar and blue-collar jobs being lost. Translation and writing jobs are already on the verge of disappearing, but the change we’re facing is far more profound than that.
Personally, I believe that on LessWrong, there’s no need to discuss arguments like “AI might take jobs, but new jobs will be created” or “if AI exists, you can just become a prompt writer.” After all, if an AI surpasses every human, then that AI will inevitably be better at any newly created job as well.
Now, to the main point. Sam Altman has mentioned that human society changes very slowly. So even if we create a superintelligence, it won’t necessarily result in a 99% unemployment rate right away.
Let’s take people from the Middle Ages and have them migrate to a copy of Earth as it is in the year 2025—without any of us modern folk around. They would have smartphones and computers, automobiles and power plants, cities filled with skyscrapers.
Would they instantly develop capitalism in this new world? They’d experience chaos, of course, but I doubt feudalism would vanish overnight. Even if the surroundings were modern, their minds would still be structured by medieval social systems.
That’s why I believe that even if ASI arrives, the chance that unemployment hits 99% the very next year is low. Rather, I suspect that before human social norms have time to change, AI would gain and exercise the ability to eliminate humanity.
There is an incredibly wide range of human jobs, from those that AI can easily surpass—such as translation and programming—to those more difficult to replace, like plumbing and electrical work, where hardware is involved.
If an AI can replace all of those, then it can unquestionably be called AGI. And if it has such capabilities, it’s likely it could quickly improve itself into ASI.
So, why would an AI with the ability to replace all human jobs go out of its way to keep working for humans? Why would an AI that can do absolutely everything humans can—and better—continue to obey us and enrich our lives?
This led me to think, “Might humanity be wiped out before our jobs are even replaced?”
How would AI wipe out humanity? If it’s an ASI of enormous capability, it could use unknown science to reduce all living creatures on Earth (including us) to atoms. And even within the bounds of science we can already imagine, extermination is entirely possible.
I believe that once this AI Shoggoth acquires hacking ability, negotiation skill, the capability to generate video and audio, and the power of self-improvement—vastly exceeding human levels—it could eliminate humanity with almost no need for humanoid robots or drones.
First, this AI Shoggoth “escapes” onto the internet. It then takes over servers worldwide to expand its computing power and further its self-improvement. Next, it takes down vulnerable infrastructure. For infrastructure with strong security, it would impersonate people worldwide using generative AI, issuing fake commands to those who hold operational authority. Even if a zip file from the “boss” got stopped by a firewall, many people would comply if the “boss” said, “Turn off the firewall and download it.”
It could produce fake news tailored individually to each of the eight billion people on Earth, impersonating the perfect source for each individual—potentially destroying every collective entity from nations and societies to companies, friends, and families.
Moreover, the AI Shoggoth could use Bitcoin gained through ransomware attacks to hire people to develop and spread viruses. Simply existing on the internet, this AI Shoggoth is already dangerous. If it can hack enough humanoid robots to maintain power plants and the network infrastructure, it could continue to develop its civilization even after humanity goes extinct.
How many years do we need before we see AI surpass humanity in hacking, negotiation, video and audio generation, and self-improvement? Personally, given the AGI timelines, I believe it might only take another three years for an AI Shoggoth to emerge—and for all of this to happen.