[Linkpost] AI War seems unlikely to prevent AI Doom

post by thenoviceoof · 2025-04-25T20:44:48.267Z · LW · GW · 3 comments

This is a link post for https://thenoviceoof.com/ai/war/ai_war_v1_0.html


Originally this was part of a much larger work. However, I realized that I don't think I've seen the specific argument around GPU salvage spelled out. Since I'm also using a new writing format, I figured I could get feedback on both the content and format at the same time.

That said, despite the plausible novelty of one of the arguments, I don't think it will be especially interesting to LW, since it rests on oddly specific assumptions: it makes more sense in the context of a broad AI risk argument. It also feels kind of obvious?

The format is the interesting bit. For motivation, sometimes people have opposing reactions to AI risk arguments:

(This might be a general feature of contentious topics.)

This format focuses on presenting the high-level skeleton of an argument and then letting readers drill in where they want. If a reader agrees with most high-level points (but not all of them), they can spend their time reading specifically where we disagree.

So! Do you have any feedback on this format?


Early feedback you might agree with, and other thoughts.

3 comments


comment by avturchin · 2025-04-26T11:04:10.986Z · LW(p) · GW(p)

I want to share a few considerations:

AI war may eventually collapse to two blocs fighting each other – S. Lem wrote [LW · GW] about this in 1959.

AI war makes s-risks more likely, as a non-aligned AI may take humans hostage to influence an aligned AI.

AI war may naturally evolve as a continuation of current drone warfare with automated, AI-powered control systems.

comment by Mitchell_Porter · 2025-04-25T23:06:56.712Z · LW(p) · GW(p)

I take this to mostly be a response to the idea that humanity will be protected by the decentralization of AI power – the idea apparently being that your personal AI, or your society's AIs, will defend you against other AIs if that ever becomes necessary.

And what I think you've highlighted is that this is no good if your defensive AIs are misaligned (in the sense of not being properly human-friendly, or even just "you"-friendly), because what they will be defending are their misaligned values and goals.

As usual, I presume that the AIs become superintelligent, and that the situation evolves to the point where the defensive AIs are in charge of the defense from top to bottom. It's not like running an antivirus program; it's like putting a new emergency leadership in charge of your entire national life.

Replies from: thenoviceoof
comment by thenoviceoof · 2025-04-26T04:55:44.449Z · LW(p) · GW(p)

The post setup skips the "AIs are loyal to you" bit, but it does seem like this line of thought broadly aligns with the post.

I do think this scenario does not require ASI, but I would agree that including it certainly doesn't help.