Cast it into the fire! Destroy it!
post by Aram Panasenco (panasenco) · 2025-01-13T07:30:19.356Z · LW · GW · 1 comment
We should only use AGI once to make it so that no one, including ourselves, can use it ever again.
I'm terrified of both getting atomized by nanobots [LW · GW] and of my sense of morality disintegrating in Extremistan [LW · GW]. We don't need AGI to create a post-scarcity society, cure cancer, solve climate change, build a Dyson sphere, colonize the galaxy, or any of the other sane things we're planning to use AGI for. It will take us hard work and time, but we can get there with the power of our own minds. In fact, we need that time to let our sense of morality adjust to our ever-changing reality. Even without AGI, most people already feel that technological progress is too fast for them to keep up.
Some of the greatest thinkers and writers of humanity have warned us of the danger and seductiveness of unlimited power. Take this passage from Tolkien and tell me it doesn't sound like most of the people you've heard talk about the wonderful things they're planning to do with AGI:
Already the Ring tempted him, gnawing at his will and reason. Wild fantasies arose in his mind; and he saw Samwise the Strong, Hero of the Age, striding with a flaming sword across the darkened land, and armies flocking to his call as he marched to the overthrow of Barad-dûr. And then all the clouds rolled away, and the white sun shone, and at his command the vale of Gorgoroth became a garden of flowers and trees and brought forth fruit. He had only to put on the Ring and claim it for his own, and all this could be.
Lovecraft warned us of what would happen when our abilities outpaced our morality, when we ourselves would become powerful like cosmic horrors:
The time would be easy to know, for then mankind would have become as the Great Old Ones; free and wild and beyond good and evil, with laws and morals thrown aside and all men shouting and killing and revelling in joy. Then the liberated Old Ones would teach them new ways to shout and kill and revel and enjoy themselves, and all the earth would flame with a holocaust of ecstasy and freedom.
Humanity has nothing to gain from AGI, and everything to lose. We don't need AGI to have human values or to follow instructions in a friendly manner. We just need to figure out that one command to seal off that power forever - without disassembling ourselves in the process.
If Geoffrey Hinton, Eliezer Yudkowsky, and other top AI researchers are wrong about the power and dangers of AGI, then the AGI will probably be incapable of carrying out the command to the extent we imagine anyway.
On the other hand, if those researchers are right, only then will humanity understand the depth of the precipice upon which it stood. It's one thing to listen to experts talk about hypothetical future dangers, another to see hundred-billion-dollar distributed computers inexplicably turned into paperweights. Few will be able to deny the reach of the power and of the danger then. Humanity will survive, and hopefully recognize that there really are "seas of black infinity" out there that we may never be ready to touch.
If you, dear reader, have a chance of being that first person to give a command to a superintelligence, don't be an Isildur. Unlimited power won't do any good for you or for anyone else, and it was not meant for us to bear. If you can, seal that power and free humanity from the fear of eternal death and eternal nightmare.
Of course, using an AGI to ensure that AGI can never be used again is easier said than done. Even this seemingly simple problem appears to be on the same order of difficulty as alignment in general, and just as likely to get us all disassembled if we screw it up. Still, this is the problem AI alignment researchers should be focused on.
One silver lining: we may already be within the light cone of an alien civilization that actually got this right. In that case their "anti-AGI AGI" is already here in our solar system, and we'll simply get to laugh as Microsoft admits it can't turn Stargate on, then go back to living our normal lives.
1 comment
comment by cousin_it · 2025-01-13T08:38:25.480Z · LW(p) · GW(p)
What about biological augmentation of intelligence? I think if other avenues are closed, this one can still go pretty far and make things just as weird and risky. You can imagine biological self-improving intelligences too.
So if you're serious about closing all avenues, it amounts to creating a god that will forever watch over everything and prevent things from becoming too smart. It doesn't seem like such a good idea anymore.