comment by jbash · 2025-01-20T13:37:28.693Z · LW(p) · GW(p)
As a final note: the term "Butlerian Jihad" is taken from Dune and describes the shunning of "thinking machines" by mankind.
In Dune, "thinking machines" are shunned because of a very longstanding taboo that was pretty clearly established in part by a huge, very bloody war. The intent was to make that taboo permanent, not a "pause", and it more or less succeeded in that.
It's a horrible metaphor and I strongly suggest people stop using it.
the Culture ending, where CEV (or similar) aligned, good ASI is created and brings us to some hypothetical utopia. Humanity enjoys a rich life in some manner compatible with your personal morals.
santa claus to 11 ending: ASI solves our problems and human development stagnates; ASI goes on to do its own thing without killing humans - but without human influence on the lightcone
Um, humans in the Culture have no significant influence on the lightcone (other than maybe as non-agentic "butterfly wings"). The Minds decide what's going to happen. Humans opposed to that will be convinced (often via manipulation subtle enough that they don't even know about it) or ignored. Banks struggled to even find reasons to write stories about the humans, and sometimes had to cheat to do so.
I have come to accept that some people have an attachment to the whole "human influence" thing, but how can you believe that and simultaneously say the Culture is a good outcome?
Replies from: waterlubber
↑ comment by waterlubber · 2025-01-20T16:18:44.223Z · LW(p) · GW(p)
The intent was to make that taboo permanent, not a "pause", and it more or less succeeded in that.
I would not be opposed to a society stalled at 2016-level AI/computing that held that level indefinitely. Progress can certainly continue without AGI via e.g. human intelligence enhancement, or by just sending our best and brightest to work directly on our problems instead of on zero-sum marketing or AI efforts.
Um, humans in the Culture have no significant influence on the lightcone (other than maybe as non-agentic "butterfly wings"). The Minds decide what's going to happen
Humans were still free to leave the Culture, however; not all of the lightcone was given to the AI. Were we to develop aligned ASI, it would be wise to slice off a chunk of the lightcone for humans to develop "on their own."
I don't think the Culture is an ideal outcome, either, merely a "good" one that many people would be familiar with. "Uplifting" humans rather than developing replacements for them will likely lead us down a better path, although the shift in moral alignment caused by whatever the uplifting process is might limit its utility.
comment by AaronF (aaron-franklin-esq) · 2025-01-20T22:00:20.650Z · LW(p) · GW(p)
"Likewise, risks from competing nation states (e.g China) could be mitigated via existing intentional collaboration strategies - nuclear proliferation management techniques like inspections & intelligence agencies keeping check on each other could feasibly serve as a means for the world to prevent the development of AI. "
This is a word salad with zero empirical or theoretical foundation. Gunpowder, greenhouse gases, virus pathology, and many other fields have shown this to be empirically false. We'd all be better off if there weren't arms races and runaway selection (though would we have evolved in the first place?), but denying this fact gets us nowhere.
↑ comment by waterlubber · 2025-01-20T23:27:35.065Z · LW(p) · GW(p)
Counterpoints: leaded gasoline, the Montreal Protocol, and human genome editing. I don't think it's possible to completely eliminate development, but an OOM reduction in the rate of deployment or spread is easily achievable.
Replies from: aaron-franklin-esq
↑ comment by AaronF (aaron-franklin-esq) · 2025-01-21T00:11:40.044Z · LW(p) · GW(p)
Easily? Those weren't arms races, and I'd argue that the genetic engineering issue is entirely a product of the inherent limitations and difficulty of the technology, not an outside agreement to cull the arms race. Leaded gasoline would harm the individual nations even without an agreement.
Is there any agreement where a country has agreed to cut its own horns? (You could argue the Russia-US missile agreement, which has been a strategic disaster re: China, though that is a quantity agreement, not a quality one.)
I think your argument about the impact and ability of AI is exactly the reason your agreement would never work, never mind that enforcement would be nearly impossible (I doubt LLMs are constrained by GPUs). You are trying to have it both ways: AI would give a country a decisive and massive edge in development, but it won't take it because of an agreement? And do so easily? And no other country will defect? (Even NK defected with nukes.)
I wish I had your optimism about human nature or animal life in general; you are trying to modify evolution.
comment by Robert Cousineau (robert-cousineau) · 2025-01-20T07:07:35.674Z · LW(p) · GW(p)
I mostly agree with the body of this post, and think your calls to action make sense.
On your title and final note: Butlerian Jihad feels out of place. It's catchy, but it seems like you are recommending that AI-concerned people more or less do what AI-concerned people already do. I feel like we should save our ability to use words that are a call to arms for a time when that is what we are doing.
↑ comment by waterlubber · 2025-01-20T16:40:19.286Z · LW(p) · GW(p)
I think AI-concerned people mostly take their concerns about AI to somewhat insular communities: Twitter, et al. Wealthy interest groups are going to dominate internet communication, and I don't think that's an effective way to drum up concern or coordination. The vast majority of the working-class, average-Joe, everyday population is going to be opposed to AI for one reason or another; I think AI interest groups developing a united front specifically for these people would be the most effective technique.
(one way to accomplish this might be to spread these "quietly" among online sympathizers before "deploying" them to the public? I'm sure this sort of grassroots PR campaign has been solved and analyzed to death already)
comment by Richard_Kennaway · 2025-01-20T15:32:39.921Z · LW(p) · GW(p)
We find ourselves at the precipice of
tAI;dr.
Replies from: waterlubber
↑ comment by waterlubber · 2025-01-20T16:12:56.805Z · LW(p) · GW(p)
This entire article was written solely by me without the "assistance" of any language models.
Replies from: Richard_Kennaway
↑ comment by Richard_Kennaway · 2025-01-20T16:29:02.988Z · LW(p) · GW(p)
Ok, I'll take your word for it. It was still the most clichéd possible opening.
Replies from: waterlubber
↑ comment by waterlubber · 2025-01-20T16:40:52.759Z · LW(p) · GW(p)
Fair.
comment by Foyle (robert-lynn) · 2025-01-20T07:26:17.498Z · LW(p) · GW(p)
The phone seems to be off the hook for most of the public on AI danger, perhaps a symptom of burnout from numerous other scientific millennialist scares - people have been hearing of imminent dangers of catastrophe for decades that have failed to impact the lives of 95%+ of the population in any significant way, and now just write it all off as more of the same.
I am sure that most LW readers find little in the way of positive reception for our concerns amongst less technologically engaged family members and acquaintances. There are just too many comforting techno-utopian narratives that we are still having to compete with, informed by the superficially positive representations in movies and TV, and most people bias towards optimism in relatively good/comfortable times like these. We are dealing with emotional reactions and the sheep-like 'vibe' of the population rather than thoughtfully considered positions.
But I am pretty sure that will all change as soon as we see significant AI competition/undercutting for white-collar professions. Those affected will quickly start baying for blood, and the electorally dominant empathic response to those sad stories of the economically impacted will rapidly swing the more emotively governed wider population against AI. OECD democratic governments will inevitably then move to ban AI from taking the jobs of particularly politically protected classes of people (this might still leave some niches vulnerable - like medicine, where there is always a shortage of service supply causing ridiculously high prices, and perhaps tech, for vengeful reasons). It will be a Butlerian Jihad lite, aimed at symptoms rather than causes, and will likely buy us a few years of relative normalcy as more dangerous ASI is developed in government-approved labs and by despotic regimes.
I doubt it will save us on a 50-year time frame, but it will perhaps make the economic disruption less severe for 5-10 years.
The way to have a bigger impact in the shorter term would be to buy AI-danger editorial support from influencers, particularly those with the young female audiences that form the core of environmental and other popular protest movements. They are by temperament the easiest to bring on board, and have outsized political influence.
Replies from: waterlubber
↑ comment by waterlubber · 2025-01-20T16:42:20.012Z · LW(p) · GW(p)
I think support for the belief that "AI is bad" is widespread, but people don't have a clear goal to rally behind. People want to support something, but give a resigned "what am I to do?"
If there's a strong cause with a clear chance of helping (i.e., a "don't build AI or advance computer semiconductors for the next 50 years" guild), people will rally behind it.