Can someone, anyone, make superintelligence a more concrete concept?
post by Ori Nagel (ori-nagel) · 2025-02-04T02:18:51.718Z · LW · GW · 5 comments
What especially worries me about artificial intelligence is that I'm freaked out by my inability to marshal the appropriate emotional response. - Sam Harris (NPR, 2017)
I've been thinking a lot about why so many in the public don't care much about the loss-of-control risk posed by artificial superintelligence, and I believe a big reason is that our (or at least my) feeble mind falls short at grasping the concept. A concrete notion of human intelligence is a genius, like Einstein. What is the concrete notion of artificial superintelligence?
If you can make that feel real and present, I believe I, and others, can better respond to the risk.
The future is not unfathomable
When people discuss the singularity, projections beyond that point often become "unfathomable." They take the form of: it will cleverly outmaneuver any idea we have, it will have its way with us, what happens next is TBD.
I reject much of this, because we can see low-hanging fruit all around us for a greater intelligence. A simple example is the top speed of aircraft. If a rough upper limit for the speed of an object is the speed of light in air, ~299,700 km/s, and one of the fastest aircraft, the NASA X-43, has a top speed of 3.27 km/s, then we see there's a lot of room for improvement. Certainly a superior intelligence could engineer a faster one! Another engineering problem waiting to be seized: there are plenty of zero-day hacking exploits waiting to be uncovered with intelligent attention turned toward them.
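Just to make that gap concrete, here is a minimal back-of-the-envelope sketch using the rough figures quoted above (the numbers are approximate, and the point is only the order of magnitude):

```python
# Rough headroom between one of the fastest aircraft and the speed of light in air,
# using the approximate figures quoted in the paragraph above.
speed_of_light_in_air_km_s = 299_700   # ~speed of light in air
x43_top_speed_km_s = 3.27              # NASA X-43, roughly Mach 9.6

headroom = speed_of_light_in_air_km_s / x43_top_speed_km_s
print(f"Room for improvement: ~{headroom:,.0f}x")  # on the order of 90,000x
```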
Thus, the "unfathomable" future is foreseeable to a degree: we know that engineerable things could be engineered by a superior intelligence. Perhaps it will want things that offer resources, like the rewards of successful hacks.
We can learn new fears
We are born with some innate fears, but many fears are learned. We learn to fear a gun because it makes a harmful explosion. We learn to fear a dog after it bites us.
Some things we should learn to fear are just not observable with our raw senses, like the spread of a flammable gas inside our homes. So a noxious scent gets added to make the risk observable at home and allow us to react appropriately. I've heard many logical arguments about superintelligence risk, but my contention is that these arguments don't convey the adequate emotional message. If your argument does nothing for my emotions, then it exists like a threatening but odorless gas—one that I fail to avoid because I don't readily detect it—so can you spice it up so that I understand, on an emotional level, the risk and the requisite actions to take? I don't think that requires invoking esoteric science fiction, because...
Another power we humans have is the ability to conjure up a fear that is not present. Consider this simple thought experiment: First, envision yourself at a zoo watching lions. What's your fear level? Now envision yourself placed inside the actual lion enclosure, and note the resulting fear level. Now envision a lion sprinting towards you while you're in the enclosure. Time to ruuunn!
Isn't the pleasure of any media, really, how it stirs your emotions?
So why can't someone walk me through the argument that makes me feel the risk of artificial superintelligence without requiring me to read a long tome or be transported to an exotic world of science fiction?
The appropriate emotional response
Sam Harris has said, "What especially worries me about artificial intelligence is that I'm freaked out by my inability to marshal the appropriate emotional response." As a student of the discourse, I believe that's true for most.
I've gotten flak for saying this, but having watched many, many hours of experts discussing the existential risk of AI, I see very few who express what I view as an appropriate emotional response. I see frustration and the emotions of partisanship, but these exist with everything that becomes a political issue.
To make things concrete, when I hear people discuss present job loss from AI or fears of job loss from AI, the emotions square more closely with my expectations. I do not criticize these folks so much. There's sadness from those impacted and a palpable anger from those trying to protect themselves. Artists are rallying behind copyright protections, and I'd argue it comes partially out of a sense of fear and injustice regarding the impact of AI on their livelihood. I've been around illness, death, grieving. I've experienced loss. I find the expressions about AI and job loss congruent with my expectations.
I think a huge, huge reason for the logic/emotion gap when it comes to the existential threat of artificial superintelligence is because the concept we're referring to is so poorly articulated. How can one address on an emotional level a "limitlessly-better-than-you'll-ever-be" entity in a future that's often regarded as unfathomable?
People drop their 'p(doom)' or dully recite short-term "extinction" risk timelines (which aren't easily relatable on an emotional level), or go off on deep technical tangents about one AI programming technique versus another. I'm sorry to say, but I find these expressions poorly calibrated emotionally with the actual meanings of what they're discussing.
Some examples that resonate, but why they're inadequate
Here are some of the best examples I've heard that try to address the challenges I've outlined.
When Yudkowsky talks about markets or Stockfish, he mentions how our existence in relation to them involves a sort of deference. While those are good depictions of the sense of powerlessness/ignorance/acceptance towards a greater force, they are lacking because they are narrow. Asking me, the listener, to generalize a market or Stockfish to every action is a step too far, laughably unrealistic. And that's not even me being judgmental, I think the exaggeration is so extreme that laughing is common! An easy rationalization is also to liken these to tools, and a tool like a calculator isn't so harmful.
What also provokes fear for me are discussions of misuse risks, like the idea of a bad actor getting a huge amount of computing or robotics power to enable them to control our devices and police the public with surveillance and drones and such. But this example is lacking because it doesn't describe loss of control, and it also centers on preventing other humans from getting a very powerful tool. I think this example is part of the narrative fueling the AI arms race, because it suggests that a good actor has to get the power first to suppress misuse by bad actors. To be sure, it is a risk worth fearing and trying to mitigate, but...
Where is such a description of loss of control?
A note on bias
I suspect that the inability to emotionally relate to superintelligence is aided by a few biases: hubris and denial. I think it's common to feel a sort of hubris, like, if one loses a competition they tell themselves: "Perhaps I lost in that domain, but I'm still best at XYZ, and if I trained more I'd win."
There's also a natural denial of death. We inch closer to it every day, but few actually think about it, and it's a difficult concept to accept even for those who have a terminal disease.
So, if one is reluctant to accept that something else is "better" than them out of hubris, and reluctant to accept that death is possible out of denial, well, that helps explain why superintelligence is also such a difficult concept to grasp.
A communications challenge?
So, please, can you make the concept of artificial superintelligence more concrete? Do your words arouse in a reader like me a fear on par with being trapped in a lion's den, without pointing towards a massive tome or asking me to invest in watching an entire Netflix series? If so, I think you'll be communicating in a way I've yet to see in the discourse. I'll respond in the comments to tell you why your example did or didn't register on an emotional level for me.
NOTE: I posted this to LW and I'm new here so I don't totally know the cross-posting policies. Hope it's alright that I posted here too!
5 comments
Comments sorted by top scores.
comment by Morpheus · 2025-02-04T09:09:28.680Z · LW(p) · GW(p)
NOTE: I posted this to LW and I'm new here so I don't totally know the cross-posting policies. Hope it's alright that I posted here too!
It seems you posted on LW twice instead of, or in addition to, cross-posting to the EA forum.
↑ comment by Ori Nagel (ori-nagel) · 2025-02-13T03:29:05.981Z · LW(p) · GW(p)
Whoops! I definitely posted this second one to the Alignment Forum, but I guess it got cross-posted back to LW.
comment by Odd anon · 2025-02-04T10:22:28.802Z · LW(p) · GW(p)
Strategies:
- Analogy by weaker-than-us entities: What does human civilization's unstoppable absolute conquest of Earth look like to a gorilla? What does an adult's manipulation look like to a toddler failing to understand how the adult keeps knowing things that were secret, keeps being able to direct one's actions in ways that can only be noticed in retrospect if at all?
- Analogy by stronger-than-us entities: Superintelligence is to Mossad as Mossad is to you, and able to work in parallel and faster. One million super-Mossads, who have also developed the ability to slow down time for themselves, all intent to kill you through online actions alone? That may trigger some emotional response.
- Analogy by fictional example: The webcomic "Seed" featured a nascent moderately-superhuman intelligence, which frequently used a lot of low-hanging social engineering techniques, each of which only has its impact shown after the fact. It's, ah, certainly fear-inspiring, though I don't know if it meets the "without pointing towards a massive tome" criterion. (Unfortunately, actually super-smart entities are quite rare in fiction.)
↑ comment by Ori Nagel (ori-nagel) · 2025-02-13T03:35:28.335Z · LW(p) · GW(p)
Really appreciate this response, I think you nailed it! A general superintelligence is unseeable so you have to use one of those analogies.
comment by Mordechai Rorvig (mordechai-rorvig) · 2025-02-05T01:34:43.865Z · LW(p) · GW(p)
Honestly, I think the journalism project I was working on in the last year may be most important for the way it sheds light on your question.
The purpose of the project, to be as concise as possible, was to investigate the evidence from neuroscience that there may be a strong analogy between modern AI programs, like language models, and distinctive subregions of the brain, like the so-called language network, responsible for our abilities to process and generate language.
As such, if you imagine coupling such a frontier language model with a crude agent architecture, then what you wind up with might be best viewed as a form of hyperintelligent machine sociopath, with all the extremely powerful language machinery of a human—perhaps even much more powerful, considering its inhuman scaling—but none of the machinery necessary for, say, emotional processing, empathy, and emotional reasoning, aside from the superficial facsimile of such that you get from a mastery of human language. (For example, existing models lack any and all machinery corresponding to the human dopamine and serotonin systems.)
This is, for me, a frankly terrifying realization that I am still trying to wrap my head around, and plan to be posting about more, soon. Does this help at all?