[Linkpost] AI War seems unlikely to prevent AI Doom
post by thenoviceoof · 2025-04-25T20:44:48.267Z · LW · GW · 3 comments
This is a link post for https://thenoviceoof.com/ai/war/ai_war_v1_0.html
This is a linkpost: link to post.
Originally this was part of a much larger work. However, I realized that I don't think I've seen the specific argument around GPU salvage spelled out. Since I'm also using a new writing format, I figured I could get feedback on both the content and format at the same time.
That said, despite the plausible novelty of one of the arguments, I don't think it will be especially interesting to LW, since it rests on oddly specific assumptions: it makes more sense in the context of a broad AI risk argument. It also feels kind of obvious?
The format is the interesting bit. For motivation, sometimes people have opposing reactions to AI risk arguments:
- "Your argument is nice and concise, but it doesn't address the one specific objection that I consider to be a crux, so I consider your broader argument invalid."
- "Your argument is 100 pages long? I ain't reading all that, but I'm happy for you. Or sorry that happened."
(This might be a general feature for contentious topics.)
This format focuses on presenting the high level skeleton of an argument, and then allowing readers to drill in where they want. If a reader agrees with many high level points (but not all of them), they can spend their time reading specifically where we disagree.
So! Do you have any feedback on this format?
Early feedback you might agree with, and other thoughts.
- A piece of early feedback is that bullet points are harder to read than prose; perhaps it would be better to make the top level prose, with supporting foldouts? That is, perhaps the top level as it is... is too austere? The current work was written in an "all bullets, all the time" style that resists easy transformation, but let me know if the outline form is a tremendous turn-off compared to something more prose-y.
- One early reader told me they enjoy reading 100-page theses; I suspect people who have put time into learning how to read standard long-form text may find this new format annoying.
- A related piece of early feedback is that the phone experience requires too much interaction; that reader wanted something similar that would let them just scroll through everything. I have ideas about how to do this, but if you're reading on a phone I'd be interested to hear what you think.
- AISafety.info has a more traditional outline with 1-2 levels and a chatbot. In comparison, I expect this format to be more crunchy and also help readers keep anchored in an overarching argument, but I would be curious to hear how you think my post's format compares to those resources. Obviously, not all formats are for everyone (I myself commonly think "THIS VIDEO COULD HAVE BEEN AN ESSAY"), but if 10 out of 10 people prefer a different format, that certainly paints a picture.
- It should be possible to move the entire structure into LW, since it also has <details> elements. However, copy-pasting does not appear to work, and I'm not going to move hundreds of details by hand.
- This should support Dark Mode, and have a special mode for phones/narrow screens. Let me know if you see any problems with these modes!
- Unfortunately, Firefox has some known issues.
- Doing a page search (ctrl-f or cmd-f) will not open foldouts automatically if relevant terms are found in the foldout.
- Cross reference links to in page ids won't open foldouts.
- Both bugs seem tied to the same standards spec, which is not yet implemented by Firefox: see more details in Bugzilla.
- (Actually, the duplicated bug was apparently resolved 23 days ago. If you're on some cutting-edge build of Firefox this might be fixed for you!)
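For reference, the foldouts in question are ordinary HTML `<details>` elements. A minimal sketch of the structure and the behavior at issue (the `id` and text here are invented for illustration; the behavior notes reflect my understanding of current browser support, not a guarantee):

```html
<!-- A minimal foldout: content inside <details> is hidden until expanded. -->
<details id="example-foldout">
  <summary>High-level claim (click to expand)</summary>
  <p>Supporting detail that stays collapsed by default.</p>
</details>

<!-- Chromium-based browsers auto-open a closed <details> when a
     find-in-page (ctrl-f) match or a fragment navigation like the link
     below lands inside it ("auto-expanding details" in the HTML spec).
     Firefox has not fully shipped this, hence the two bugs above. -->
<a href="#example-foldout">cross-reference into the foldout</a>
```

Nesting these elements gives the drill-down structure the post describes, with no JavaScript required.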
3 comments
Comments sorted by top scores.
comment by avturchin · 2025-04-26T11:04:10.986Z · LW(p) · GW(p)
I want to share a few considerations:
- AI war may eventually collapse to two blocs fighting each other – S. Lem wrote [LW · GW] about this in 1959.
- AI war makes s-risks more likely, as non-aligned AI may take humans hostage to influence aligned AI.
- AI war may naturally evolve as a continuation of current drone warfare with automated AI-powered control systems.
comment by Mitchell_Porter · 2025-04-25T23:06:56.712Z · LW(p) · GW(p)
I take this to mostly be a response to the idea that humanity will be protected by decentralization of AI power, the idea apparently being that your personal AI or your society's AIs will defend you against other AIs if that is ever necessary.
And what I think you've highlighted, is that this is no good if your defensive AIs are misaligned (in the sense of not being properly human-friendly or even just "you"-friendly), because what they will be defending are their misaligned values and goals.
As usual, I presume that the AIs become superintelligent, and that the situation evolves to the point that the defensive AIs are in charge of the defense from top to bottom. It's not like running an antivirus program, it's like putting a new emergency leadership in charge of your entire national life.
Replies from: thenoviceoof
↑ comment by thenoviceoof · 2025-04-26T04:55:44.449Z · LW(p) · GW(p)
The post's setup skips the "AIs are loyal to you" bit, but this line of thought does seem to broadly align with the post.
I do think this does not require ASI, but I would agree that including it certainly doesn't help.