How does the ever-increasing use of AI in the military for the direct purpose of murdering people affect your p(doom)?
post by Justausername · 2024-04-06T06:31:49.506Z · LW · GW · No comments
This is a question post.
Contents
Answers: johnswentworth (20), ryan_greenblatt (10), Dagon (9), Noosphere89 (6), Nathan Helm-Burger (5), Brendan Long (3)
I haven't personally heard much recent discussion about it, which is strange considering that companies like Anduril and Palantir are developing systems for military use, OpenAI recently deleted a clause prohibiting the use of its products in the military sector, and the government sector is also working on AI-piloted drones, rockets, information systems (hello, Skynet and AM), and so on.
And the most recent and perhaps most chilling use of it comes from Israel's invasion of Gaza, where the Israeli army has marked tens of thousands of Gazans as suspects for assassination using the Lavender AI targeting system, with little human oversight and a permissive policy for casualties.
So how does all of this affect your p(doom)? What are your general thoughts on it, and how do we counter it?
Relevant links:
https://www.972mag.com/lavender-ai-israeli-army-gaza/
https://www.wired.com/story/anduril-roadrunner-drone/
Answers
It doesn't.
↑ comment by Justausername · 2024-04-06T20:02:33.387Z · LW(p) · GW(p)
How do the militarisation of AI and so-called slaughterbots not affect your p(doom) at all? Plus, I mean, we are clearly teaching AI how to kill, giving it more power and direct access to important systems, weapons, and information.
Replies from: johnswentworth, Thane Ruthenis
↑ comment by johnswentworth · 2024-04-08T18:51:08.964Z · LW(p) · GW(p)
... man, now that the post has been downvoted a bunch I feel bad for leaving such a snarky answer. It's a perfectly reasonable question, folks!
Overcompressed actual answer: core pieces of a standard doom-argument involve things like "killing all the humans will be very easy for a moderately-generally-smarter-than-human AI" and "killing all the humans (either as a subgoal or a side-effect of other things) is convergently instrumentally useful for the vast majority of terminal objectives". A standard doom counterargument usually doesn't dispute those two pieces (though there are of course exceptions); a standard doom counterargument usually argues that we'll have ample opportunity to iterate, and therefore it doesn't matter that the vast majority of terminal objectives instrumentally incentivize killing humans, we'll iterate until we find ways to avoid that sort of thing.
The standard core disagreement is then mostly about the extent to which we'll be able to iterate, or will in fact iterate in ways which actually help. In particular, cruxy subquestions tend to include:
- How visible will "bad behavior" be early on? Will there be "warning shots"? Will we have ways to detect unwanted internal structures?
- How sharply/suddenly will capabilities increase?
- Insofar as problems are visible, will labs and/or governments actually respond in useful ways?
Militarization isn't very centrally relevant to any of these; it's mostly relevant to things which are mostly not in doubt anyways, at least in the medium-to-long term.
↑ comment by Thane Ruthenis · 2024-04-08T21:04:41.743Z · LW(p) · GW(p)
I'd say one of the main reasons is that military-AI technology isn't being optimized towards the things we're afraid of. We're concerned about generally intelligent entities capable of, e.g., automated R&D and social manipulation and long-term scheming. Military-AI technology, last I checked, was mostly about teaching drones and missiles to fly straight, recognize camouflaged tanks, and shoot designated targets while not shooting non-designated ones.
And while this still may result in a generally capable superintelligence in the limit (since "which targets would my commanders want me to shoot?" can be phrased as a very open-ended problem), it's not a particularly efficient way to approach this limit at all. Militaries, so far, just aren't really pushing in the directions where doom lies, while the AGI labs are doing their best to beeline there.
The proliferation of drone armies that could be easily co-opted by a hostile superintelligence... It doesn't have no impact on p(doom), but it's approximately a rounding error. A hostile superintelligence doesn't need extant drone armies; it could build its own, and co-opt humans in the meantime.
(Large scale) robot armies moderately increase my P(doom). And the same for large amounts of robots more generally.
The main mechanism is via making (violent) AI takeover relatively easier. (Though I think there is also a weak positive case for robot armies in that they might make relatively less smart AIs more useful for defense earlier which might mean you don't need to build AIs which are as powerful to defuse various concerns.)
Usage of AIs in other ways (e.g. targeting) doesn't have much direct effect particularly if these systems are narrow, but might set problematic precedents. It's also some evidence of higher doom, but not in a way where intervening on the variable would reduce doom.
Ehn. Kind of irrelevant to p(doom). War and violent conflict are disturbing, but not all that much more so with tool-level AI.
Especially in conflicts where the "victims" aren't particularly peaceful themselves, it's hard to see AI as anything but targeting assistance, which may reduce indiscriminate/large-scale killing.
↑ comment by Justausername · 2024-04-08T09:41:39.485Z · LW(p) · GW(p)
I'm being heavily downvoted here, but what exactly did I say wrong? In fact, I believe I said nothing wrong.
It does worsen the situation, with Israeli military forces mass murdering Palestinian civilians based on the AI's decisions while operators just rubber-stamp the actions.
Here is the +972 Mag Report: https://www.972mag.com/lavender-ai-israeli-army-gaza/
I highly advise you to read it, as it goes into greater detail about how the system actually works internally.
Replies from: Dagon
↑ comment by Dagon · 2024-04-08T14:42:41.633Z · LW(p) · GW(p)
I can only speak for myself, but I downvoted for leaning very heavily on a current political conflict, because it's notoriously difficult to reason about generalities due to the mindkilling effect of taking sides. The fact that I seem to be on a different side than you (though there ain't no side that's fully in the right - the whole idea of ethnic and religious hatred is really intractable) is only secondary.
I regret engaging on that level. I should have stuck with my main reaction that "individual human conflict is no more likely to lead to AI doom than nuclear doom". It didn't change the overall probability IMO.
↑ comment by Justausername · 2024-04-06T19:51:02.706Z · LW(p) · GW(p)
I'm sorry, but the presented example of Israel's Lavender system shows the exact opposite: it exacerbates an already prevalent mass murder of innocent civilians to an even greater degree, with operators just rubber-stamping the decisions. I'm afraid that in this example it does absolutely nothing to reduce indiscriminate large-scale killing; it directly facilitates it. It's right there in the attached +972 Mag report.
And I'm sorry, but did you mean to call the currently targeted Palestinian civilians unpeaceful "victims" (in quotes)? Because to me that just sounds barbaric, utterly insane, and simply immoral, especially during the ongoing genocide in Gaza.
I basically agree with John Wentworth here that it doesn't affect p(doom) at all, but one thing I will say is that it makes claims that humans will make the decisions and be accountable once AI gets very useful rather hard to credit.
More generally, one takeaway I see from the military's use of AI is that there are strong pressures to let them operate on their own, and this is going to be surprisingly important in the future.
Personally, I have gradually moved to seeing this as lowering my p(doom). I think humanity's best chance is to politically coordinate to globally enforce strict AI regulation. I think the most likely route to this becoming politically feasible is through empirical demonstrations of the danger of AI. I think AI is more likely to be legibly empirically dangerous to political decision-makers if it is used in the military. Thus, I think military AI is, counter-intuitively, lowering p(doom). A big accident in which a military AI killed thousands of innocent people the military had not intended to kill could, grimly, do a lot to lower p(doom).
This is a sad thing to think, obviously. I'm hopeful we can come up with harmless demonstrations of the dangers involved, so that political action will be taken without anyone needing to be killed.
In scenarios where AI becomes powerful enough to present an extinction risk to humanity, I don't expect the level of robotic weaponry it controls to matter much. It will have many, many opportunities to hurt humanity that look nothing like armed robots and greatly exceed their power.
While military robots might be bad for other reasons, I don't really see the path from this to doom. If AI-powered weaponry doesn't work as expected, it might kill some people, but it can't repair or replicate itself or make long-term plans, so it's not really an extinction risk.
↑ comment by Justausername · 2024-04-08T06:51:32.774Z · LW(p) · GW(p)
AI-powered weaponry can always be hacked or modified, perhaps even talked to; all of this means it can be used in more than one way. You can't hack a bullet, but you can hack an AI-powered ship. So individually these systems might not be dangerous, but they don't exist in isolation.
Also, the militarisation of AI might create systems that are designed to be dangerous and amoral, without any proper oversight. This opens us up to a flood of potential dangers, some of which are hard to predict even now.
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2024-04-08T11:21:16.188Z · LW(p) · GW(p)
If military AI is dangerous, it's not because it's military. If a military robot can wield a gun, a civilian robot can certainly acquire one as well.
The military may create AI systems that are designed to be amoral, but it will not want systems that overinterpret orders or violate the chain of command. Here as everywhere, if intentional misuse is even possible at all, alignment is critical and unintentional takeoff remains the dominant risk.
In the seminal AI safety work Terminator, the Skynet system successfully triggers a world war because it is a military AI in command of the US nuclear arsenal, and thus has the authority to launch ICBMs. Ironically, given how often it is ridiculed, this gets the AI risk quite right but grievously misjudges the state of computer security. If Skynet were running on Amazon AWS instead of a military server cluster, it would be only marginally delayed from reaching the same outcome.
The prompting is not the hard part of operating an AI. If you can talk an AI ship into going rogue, a civilian AI can talk it into going rogue too. This situation is inherently brimming with doom; it is latently doomed in multiple ways; the military training and direct access to guns merely remove small roadbumps. All the risk materialized at once, when you created an AI with the cognitive capability to conceive of and implement plans that used a military vessel for its own goals. Whether the AI was specifically trained on this task is, in this case, really not the primary source of danger.
"My AI ship has gone rogue and is shelling the US coastline."
"I hope you learnt a lesson here."
"Yes. I will not put the AI on the ship next time."
"You may be missing the problem here--"
Replies from: Justausername
↑ comment by Justausername · 2024-04-08T13:14:33.275Z · LW(p) · GW(p)
Yes, a civilian robot can acquire a gun, but that still makes it safer than a military robot that already has a whole arsenal of weapons and military gadgets right away. The civilian robot would have to do additional work to acquire them, and it is still better to have it do more work and face more roadblocks rather than fewer.
I think we are mainly speculating on what the military might want. It might want a button that instantly kills all its enemies with one push, but it might not get that (or it might, who knows now). I personally do not think they will rank a more effective AI (effective at killing humans) below a less effective but more controllable one. They will want an edge over the enemy. Always. And if that means sacrificing some controllability or anything else, they might just do it. But they might not even get that; they might end up with an uncontrollable and error-prone AI and nothing better. The military aren't gods; they don't always get what they want. And someone up top might decide "To hell with it, it's good enough," and that will be it.
And as for your ship analogy: it's one thing to talk a civilian AI vessel into going rogue; it's a different thing entirely to talk a frigate or nuclear submarine into going rogue. The risks are different. One has control over a simple vessel; the other has control over a whole arsenal. My point is that the second increases risk substantially and should be strenuously avoided for security reasons.
I think it still does increase the danger if an AI is trained without any moral guidance or any possibility of moral guardrails, and is instead trained to kill people and efficiently put humans in harm's way. Current AI systems have something akin to Anthropic's AI constitution, which tries to instil some moral guardrails and respect for human life and human rights. I don't think AIs trained for the military are going to have the same principles applied to them in the slightest; in fact, it's much more likely to be the opposite, since killing humans is the military's business. I think the second case poses higher risks than the first (not saying the first is without risks, but I do believe it is still safer). There are levels to this: things that are more or less safe, things that make it harder or easier.
Replies from: korin43
↑ comment by Brendan Long (korin43) · 2024-04-08T19:12:40.668Z · LW(p) · GW(p)
When people talk about p(doom) they generally mean the extinction risk directly from AI going rogue. The way I see it, that extinction-level risk is mostly self-replicating AI, and an AI that can design and build silicon chips (or whatever equivalent) can also build guns, and an AI designed to operate a gun doesn't seem more likely to be good at building silicon chips.
I do worry that AI in direct control of nuclear weapons would be an extinction risk, but for standard software engineering reasons (all software is terrible), not for AI-safety reasons. The good news is that I don't really think there's any good reason to put nuclear weapons directly in the hands of AI. The practical nuclear deterrent is submarines and they don't need particularly fast reactions to be effective.