Would it be good or bad for the US military to get involved in AI risk?
post by Grant Demaree (grant-demaree) · 2023-01-01T19:02:30.892Z · LW · GW · 3 comments
This is a question post.
Contents
Answers: 22 Trevor1 · 9 Thane Ruthenis · 8 Caleb Withers · 2 Dirichlet-to-Neumann
3 comments
Meant as a neutral question. I'm not sure whether this would be good or bad on net:
Suppose key elements of the US military took x-risk from misaligned strong AI very seriously. Specifically, I mean:
- Key scientists at the Defense Threat Reduction Agency. They have a giant budget (~$3B/year) and are literally responsible for x-risks. Current portfolio is focused on nuclear risks with some biosecurity
- Influential policy folks at the Office of the Undersecretary of Defense for Policy. Think dignified career civil servants, 2 levels below the most senior political appointees
- Commander's Initiative Group at USSTRATCOM. Folks who have the commander's ear, tend to be well-respected, and have a huge effect on which ideas are taken seriously
Why this would be good:
- The military has far more discretionary budget than anyone else in the world. You could multiply the resources dedicated to AI safety research tenfold overnight
- The military is a huge source of AI risk (in the sense that advancing AI capabilities faster obviously helps the US in competition with China). If key influencers took the risk seriously, they might be more judicious about their capabilities research
- A key policy goal is preventing the sharing of AI capabilities research. The military is very good at keeping things secret and has policy levers to make private companies do the same
- The military is a huge source of legitimacy with the general public. And it seems easier than other routes to legitimacy. I think fewer than 10 key people actually need to be persuaded on the merits, and everyone else will follow suit
- If the right person agrees, it's literally possible to get one of the best researchers from this community appointed to lead AI safety research for a major government agency, in the same sense that Wernher von Braun led the space program. You just have to be really familiar with the civil service's intricate rules for hiring
Why this would be bad:
- If someone presents the ideas badly, it's possible to poison the well for later. You could build permanent resistance in the civil service to AI safety ideas. And it's really easy to make that mistake: presentations that work in these agencies are VERY different from what works in the tech community
- Even if the agency is persuaded, they could make a noisy and expensive but ultimately useless effort
- A big government project (with lots of middle managers) adds a moral maze element to alignment research
Answers
Have you read the first two chapters of Thomas Schelling's 1966 Arms and Influence? They're around 50 pages.
The general gist is that if a lot of powerful Americans in the DoD take something seriously, such as preventing nuclear war, then foreign intelligence agencies will be able to hold that thing hostage in order to squeeze policy concessions out of the US.
It's a lot more complicated than that, since miscommunication, corruption, compartmentalization, and infighting all muddy the waters of what things are valued by any given military.
↑ comment by Caleb W (caleb-withers-1) · 2023-01-04T18:36:58.838Z · LW(p) · GW(p)
This does seem like an important issue to consider, but my guess is it probably shouldn't be a crux for answering OP's question (or at least, further explanation is needed for why it might be)? Putting aside concerns about flawed pursuit of a given goal, it would be surprising if the benefits of caring about a goal were outweighed by second order harms from competitors extracting concessions.
↑ comment by Grant Demaree (grant-demaree) · 2023-01-02T19:21:28.855Z · LW(p) · GW(p)
I bet that's true
But it doesn't seem sufficient to settle the issue. A world where aligning/slowing AI is a major US priority, which China sometimes supports in exchange for policy concessions, sounds like a massive improvement over today's world
The theory of impact here is that there are a lot of policy actions that could slow down AI, but they're bottlenecked on legitimacy. The US military could provide that legitimacy
They might also help alignment, if the right person is in charge and has a lot of resources. But even if 100% of their alignment research is noise that doesn't advance the field, military involvement could still be a huge net positive
So the real questions are:
- Is the theory of impact plausible?
- Are there big risks that mean this does more harm than good?
↑ comment by trevor (TrevorWiesinger) · 2023-01-03T00:34:22.297Z · LW(p) · GW(p)
I don't know about "providing legitimacy"; that's like spending a trillion dollars to procure one single gold toilet seat. Gold toilet seats are great, thanks to human signalling-based psychology, but they're not worth the trillion dollars. The military is not built to be easy to steer; that would be a massive vulnerability to foreign intelligence agencies.
↑ comment by Grant Demaree (grant-demaree) · 2023-01-03T00:43:04.981Z · LW(p) · GW(p)
My model of "steering" the military is a little different from that. It's over a thousand partially autonomous headquarters, which each have their own interests. The right hand usually doesn't know what the left is doing
Of the thousand-plus headquarters, there are probably 10 that have the necessary legitimacy and can get the necessary resources. Winning over any one of those 10 is sufficient to get the results I described above
In other words, you don't have to steer the whole ship. Just a small part of it. I bet that can be done in 6 months
Given that no-one's posted a comment in the affirmative yet:
I'd guess that more US national security engagement with AI risk is good. In rough order why:
- I think the deployment problem [LW · GW] is a key challenge, and an optimal strategy for addressing this challenge will have elements of transnational competition, information security, and enforcement that benefit from or require the national security apparatus.
- As OP points out, there's some chance that the US government/military ends up as a key player advancing capabilities, so it's good for them to be mindful of the risks.
- As OP points out, if funding for large alignment projects seems promising, places like DTRA have large budgets and a strong track record of research funding.
I agree that there are risks with communicating AI risk concepts in a way that poisons the well, lacks fidelity, gets distorted, or fails to cross inferential distances, but these seem like things to manage and mitigate rather than give up on. Illustratively, I'd be excited about bureaucrats, analysts and program managers reading things like Alignment Problem from a Deep Learning Perspective [LW · GW], Unsolved Problems in ML Safety [LW · GW], or CSET's Key Concepts in AI Safety series; and developing frameworks and triggers to consider whether and when cutting-edge AI systems merit regulatory attention as dual use and/or high-risk systems a la the nuclear sector. (I include these examples as things that seem directionally good to me off the top of my head, but I'm not claiming they're the most promising things to push on in this space).
To be honest, I'm just as afraid of aligned AGI as of unaligned AGI. An AGI aligned with the values of the PRC seems like a nightmare. If it's aligned with the US Army, it's merely really bad, and Yudkowsky's dath ilan is not exactly the world I want to live in either...
↑ comment by Grant Demaree (grant-demaree) · 2023-01-02T20:09:11.579Z · LW(p) · GW(p)
I don't agree, because a world of misaligned AI is known to be really bad, whereas a world of AI successfully aligned by some opposing faction probably has a lot in common with your own values
Extreme case: ISIS successfully builds the first aligned AI and locks in its values. This is bad, but it's way better than misaligned AI. ISIS wants to turn the world into an idealized 7th-century Middle East, which is a pretty nice place compared to much of human history. There's still a lot in common with your own values
3 comments
comment by jmh · 2023-01-01T22:42:46.184Z · LW(p) · GW(p)
Is it possible they already are? I could certainly see AI risks being part of the risk associated with both nuclear and bio threats.
I'm not sure (others here with direct exposure can answer better) that funding is a limiting factor at this point. If not, then the budget aspect doesn't matter. What other constraints might DoD involvement help relax?
↑ comment by Adam Scholl (adam_scholl) · 2023-01-02T03:56:02.536Z · LW(p) · GW(p)
As I understand it, the recent US semiconductor policy updates [LW · GW]—e.g., CHIPS Act, export controls—are unusually extreme [LW(p) · GW(p)], which does seem consistent with the hypothesis that they're starting to take some AI-related threats more seriously. But my guess is that they're mostly worried about more mundane/routine impacts on economic and military affairs, etc., rather than about this being the most significant event since the big bang; perhaps naively, I suspect we'd see more obvious signs if they were worried about the latter, a la physics departments clearing out during the Manhattan Project.