Can someone, anyone, make superintelligence a more concrete concept?
post by Ori Nagel (ori-nagel) · 2025-01-30T23:25:36.135Z · LW · GW · 3 comments
What especially worries me about artificial intelligence is that I'm freaked out by my inability to marshal the appropriate emotional response. - Sam Harris (NPR, 2017)
I've been thinking a lot about why so many in the public don't care much about the loss-of-control risk posed by artificial superintelligence, and I believe a big reason is that our (or at least my) feeble minds fall short at grasping the concept. A concrete notion of human intelligence is a genius, like Einstein. What is the concrete notion of artificial superintelligence?
If you can make that feel real and present, I believe I, and others, can better respond to the risk.
The future is not unfathomable
When people discuss the singularity, projections beyond that point often become "unfathomable." They tend to take the form of: it will cleverly outmaneuver any idea we have, it will have its way with us, and what happens next is TBD.
I reject much of this, because we can see low-hanging fruit all around us for a greater intelligence. A simple example is the top speed of aircraft. If a rough upper limit for the speed of an object is the speed of light in air, ~299,700 km/s, and one of the fastest aircraft, the NASA X-43, has a top speed of 3.27 km/s, then we see there's a lot of room for improvement. Certainly a superior intelligence could engineer a faster one! Another engineering problem waiting to be seized: there are plenty of zero-day exploits waiting to be uncovered once serious intelligence is directed at finding them.
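As a back-of-the-envelope illustration (a minimal sketch using only the figures quoted above, not a physical argument):

```python
# Back-of-the-envelope: headroom between the fastest aircraft and a
# rough physical speed limit, using the figures quoted above.
speed_of_light_in_air_km_s = 299_700  # approximate speed of light in air
x43_top_speed_km_s = 3.27             # NASA X-43's record (~Mach 9.6)

headroom = speed_of_light_in_air_km_s / x43_top_speed_km_s
print(f"Room for improvement: ~{headroom:,.0f}x")  # prints ~91,651x
```

A factor of roughly ninety thousand, even against a limit no aircraft could plausibly approach, is the kind of gap this argument gestures at.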
Thus, the "unfathomable" future is foreseeable to a degree, like we know that engineerable things could be engineered by a superior intelligence. Perhaps they will want things that offer resources, like the rewards of successful hacks.
We can learn new fears
We are born with some innate fears, but many fears are learned. We learn to fear a gun because it makes a harmful explosion. We learn to fear a dog after it bites us.
Some things we should learn to fear are just not observable with raw senses, like the spread of a flammable gas inside our homes. So a noxious scent gets added to make the risk observable at home, allowing us to react appropriately. I've heard many logical arguments about superintelligence risk, but my contention is that these arguments don't convey an adequate emotional message. If your argument does nothing for my emotions, then it exists like a threatening but odorless gas, one that I fail to avoid because I don't readily detect it. So can you spice it up so that I understand, on an emotional level, the risk and the requisite actions to take? I don't think that requires invoking esoteric science fiction, because...
Another power we humans have is the ability to conjure up a fear that is not present. Consider this simple thought experiment: First, envision yourself at a zoo watching lions. What's the fear level? Now envision yourself placed inside the actual lion enclosure, and note the resulting fear level. Now envision a lion sprinting towards you while you're in the enclosure. Time to ruuunn!
Isn't the pleasure of any media, really, how it stirs your emotions?
So why can't someone walk me through the argument that makes me feel the risk of artificial superintelligence, without requiring me to read a long tome or be transported to an exotic world of science fiction?
The appropriate emotional response
Sam Harris has said, "What especially worries me about artificial intelligence is that I'm freaked out by my inability to marshal the appropriate emotional response." As a student of the discourse, I believe that's true for most.
I've gotten flak for saying this, but having watched many, many hours of experts discussing the existential risk of AI, I see very few who express what I view as an appropriate emotional response. I see frustration and the emotions of partisanship, but those accompany everything that becomes a political issue.
To make things concrete, when I hear people discuss present job loss from AI or fears of job loss from AI, the emotions square more closely with my expectations. I do not criticize these folks so much. There's sadness from those impacted and a palpable anger from those trying to protect themselves. Artists are rallying behind copyright protections, and I'd argue it comes partially out of a sense of fear and injustice regarding the impact of AI on their livelihood. I've been around illness, death, grieving. I've experienced loss. I find the expressions about AI and job loss congruent with my expectations.
I think a huge, huge reason for the logic/emotion gap when it comes to the existential threat of artificial superintelligence is that the concept we're referring to is so poorly articulated. How can one respond on an emotional level to a "limitlessly-better-than-you'll-ever-be" entity in a future that's often regarded as unfathomable?
People drop their p(doom), dully recite short-term "extinction" timelines (which aren't easily relatable on an emotional level), or veer into deep technical tangents on one AI programming technique versus another. I'm sorry to say, but I find these expressions poorly calibrated emotionally with the actual meaning of what they're discussing.
Some examples that resonate, but why they're inadequate
Here are some of the best examples I've heard that try to address the challenges I've outlined.
When Yudkowsky talks about markets or Stockfish, he mentions how our existence in relation to them involves a sort of deference. While those are good depictions of the sense of powerlessness/ignorance/acceptance towards a greater force, they are lacking because they are narrow. Asking me, the listener, to generalize a market or Stockfish to every action is a step too far, laughably unrealistic. And that's not even me being judgmental; I think the exaggeration is so extreme that laughing is common! An easy rationalization is also to liken these to tools, and a tool like a calculator isn't so harmful.
What also provokes fear for me are discussions of misuse risks, like the idea of a bad actor getting a huge amount of computing or robotics power to enable them to control our devices and police the public with surveillance and drones and such. But this example is lacking because it doesn't describe loss of control, and it also centers on preventing other humans from getting a very powerful tool. I think this example is part of the narrative fueling the AI arms race, because it suggests that a good actor has to get the power first to suppress misuse by bad actors. To be sure, it is a risk worth fearing and trying to mitigate, but...
Where is such a description of loss of control?
A note on bias
I suspect that the inability to emotionally relate to superintelligence is aided by two biases: hubris and denial. I think it's common to feel a sort of hubris; if one loses a competition, one tells oneself: "Perhaps I lost in that domain, but I'm still best at XYZ, and if I trained more I'd win."
There's also a natural denial of death. We inch closer to it every day, but few actually think about it, and it's a difficult concept to accept even for those with a terminal disease.
So, if one is reluctant out of hubris to accept that something else is "better" than them, and reluctant out of denial to accept that death is possible, that helps explain why superintelligence is also such a difficult concept to grasp.
A communications challenge?
So, please, can you make the concept of artificial superintelligence more concrete? Do your words arouse in a reader like me a fear on par with being trapped in a lion's den, without pointing towards a massive tome or asking me to invest in watching an entire Netflix series? If so, I think you'll be communicating in a way I've yet to see in the discourse. I'll respond in the comments to tell you why your example did or didn't register on an emotional level for me.
3 comments
Comments sorted by top scores.
comment by Anon User (anon-user) · 2025-01-31T00:01:52.214Z · LW(p) · GW(p)
I found Eliezer Yudkowsky's "blinking stars" story (That Alien Message — https://search.app/uYn3eZxMEi5FWZEw5) persuasive. That story also has a second layer of an extra-smart Earth with better-functioning institutions, but at the level of intuition you are going for, it is probably unnecessary and would detract from the message. I think imagining a NASA-like organisation dedicated to controlling a remote robot at, say, 1 cycle of its control loop per month (corresponding, perhaps, to 1/30 of a second for the aliens), showing how totally screwed the aliens are in this scenario, and then flipping it around, should be at least somewhat emotionally persuasive.
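For a rough sense of the scale of that gap (just arithmetic on the numbers above, nothing more):

```python
# Rough scale of the speed gap in the thought experiment above:
# one control-loop cycle per month on the slow side vs. a subjective
# 1/30 of a second for the aliens.
seconds_per_month = 30 * 24 * 60 * 60  # ~2.59 million seconds
alien_moment_s = 1 / 30                # one subjective "moment"

speed_ratio = seconds_per_month / alien_moment_s
print(f"Subjective speed gap: ~{speed_ratio:,.0f}x")  # ~77,760,000x
```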
comment by Ori Nagel (ori-nagel) · 2025-01-31T00:40:14.131Z · LW(p) · GW(p)
Ah yes, Rational Animations did a great video of that story. That did make superintelligence more graspable, but, you know, I had watched it and forgotten about it. I think it showed how our human civilization is vulnerable to other intelligences (aliens), but it still didn't make the superintelligence concept an easy one to grok.
comment by Ben Livengood (ben-livengood) · 2025-01-31T02:17:45.399Z · LW(p) · GW(p)
I think people have a lot of trouble envisioning or imagining what the end of humanity and our ecosystem would be like. We have disaster movies; many of them almost end humanity and leave some spark of hope or perspective at the end. Instead, imagine any disaster-movie scenario that ends somewhere before that moment, with just a dead, empty planet left to be disassembled or abandoned. The perspective is that history and ecology have been stripped away from the ball of rock without a trace remaining, because none of it mattered enough to a superintelligence to preserve even a record of it. Emotionally, it should feel like burning treasured family photographs and keepsakes.