[Linkpost] A Chinese AI optimized for killing
post by RomanS · 2022-06-03T09:17:42.028Z · LW · GW
The AI depicted in the Terminator movies is rather stupid: there are much more efficient ways to kill all humans than robots with guns.
We can safely ignore the unrealistic Terminator-like scenario of AI X-risk.
...Or can we?
Tsinghua University is a top university located in Beijing. It is heavily involved in research for the Chinese military. One of its military labs is called "The State Key Laboratory of Intelligent Technology and Systems".
In 2021, two of the university's researchers released a paper called Counter-Strike Deathmatch with Large-Scale Behavioural Cloning.
Some highlights:
- The rewards are calculated as r = 1.0K − 0.5D − 0.02F, where K is a kill, D is the agent's own death, and F is a shot fired (sketches of the reward, the evaluation metrics, and a behavioural-cloning step follow this list). One could interpret it as follows: 1) the agent must kill; 2) the agent must protect its own existence, as long as such protection does not conflict with the first rule; 3) the agent must spare ammunition, as long as doing so does not conflict with the first and second rules.
- "To determine when to stop training, we evaluated the agent after each epoch, measuring kills-per-minute"
- "Kill/death ratio (K/D) is the number of times a player kills an enemy compared to how many times they die. Whilst useful as one measure of an agent’s performance, more information is needed – avoiding all but the most favourable firefights would score a high K/D ratio, but may be undesirable. We therefore also report kills-per-minute (KPM). A strong agent should have both a high KPM and high K/D"
- "In this paper we take on such a challenge; building an agent for Counter-Strike: Global Offensive (CSGO), with no access to an API, and only modest compute resources (several GPUs and one game terminal)."
- "Our solution uses behavioural cloning - training on a large noisy dataset scraped from human play on online servers..."
The article links to a video.
From the article and the authors' affiliation, I drew the following conclusions:
- It is likely that China is already working on fully autonomous weaponry
- One can already build autonomous weaponry with very modest computational resources and publicly available data
- Efficient autonomous weaponry of mass destruction doesn't require human-level intelligence. In the game environment, even the primitive agent described in the article can kill at a rate of 30 humans per hour while fighting against skilled, armed humans. Slightly more intelligent, mass-produced slaughterbots may be able to decimate entire cities in hours.
- Some researchers really should stop for a minute and ask themselves: maybe I shouldn't build an AI optimized for killing people while working at a university involved in AI research for the Chinese military?
4 comments
Comments sorted by top scores.
comment by 1a3orn · 2022-06-03T12:04:48.838Z · LW(p) · GW(p)
The article title here is hyperbolic.
The title is misleading in the same way that calling AlphaStar "a Western AI optimized for strategic warfare" is misleading. Should we also say that the earlier Western work on Doom -- see VizDoom -- was also about creating "agents optimized for killing"? That was work on an FPS as well. This is just more of the same -- researchers trying to find interesting video games to work on.
This work transfers with just as much ease / difficulty to real-world scenarios as AI work on entirely non-military-skinned video games -- that is, it would take enormous engineering effort, and any use in military robots would be several levels of further work removed, such that the foundation of a military system would be very different. (I.e., military robots can't work with behavioral cloning based on absolutely unchanging + static environments / maps, with clean command / movement relations, for many reasons.) Many researchers' work on navigating environments -- though not military-themed -- would be just as applicable.
comment by RomanS · 2022-06-03T13:11:13.474Z · LW(p) · GW(p)
The title is misleading in the same way that calling AlphaStar "a Western AI optimized for strategic warfare" is misleading.
That's a fair description of AlphaStar. For example, see this NATO report (pdf).
Obviously, military people of both NATO and China are trying to apply any promising AI research that they deem relevant for the battlefield. And if your promising research is military-themed, it is much more likely to get their attention. Especially if you're working at a university that does AI research for the military (like the aforementioned Tsinghua University).
Should we also say that the earlier Western work on Doom -- see VizDoom -- was also about creating "agents optimized for killing"? That was work on an FPS as well. This is just more of the same -- researchers trying to find interesting video games to work on.
There is a qualitative difference between the primitive, pixelated Doom and the realistic CS. The latter is much easier to transfer to the battlefield because of its much more realistic graphics, physics, military tactics, and weaponry.
This work transfers with just as much ease / difficulty to real-world scenarios as AI work on entirely non-military-skinned video games...
Not sure about that. Clearly, CS is much more similar to a real battlefield than, say, Super Mario. Thus, the transfer should be much easier.
...it would take enormous engineering effort, and any use in military robots would be several levels of further work removed, such that the foundation of a military system would be very different...
Also not sure about that. For example, one of the simple scenarios in the article is turret-like: the agent is fixed in one place and shoots moving targets (that look like real humans). I can imagine putting the exact same agent into a real automated turret; with suitable middleware, it would be capable of shooting down moving targets at decent rates.
The main issue is that once you have a mid-quality agent that can shoot at people, it is trivial to improve its skill and get it to superhuman levels. The task is much easier than, say, self-driving cars, as the agent's only goal is to maximize damage, and the agent's body is expendable.
comment by A Ray (alex-ray) · 2022-06-05T21:50:50.326Z · LW(p) · GW(p)
This seems like an overly alarmist take on what is a pretty old trend of research. Six years ago there were a number of universities working on similar models for the VizDoom competition (IIRC they were won by Intel and Facebook). It seems good to track this kind of research, but IMO the conclusions here are not supported at all by the evidence presented.
comment by RomanS · 2022-06-06T06:55:42.322Z · LW(p) · GW(p)
The trend of research is indeed old. But in this case, my alarmism is based on the combination of the following factors:
- the environment is much closer to a real battlefield than Doom and similar games
- the AI is literally optimized for killing humans (or more precisely, entities that look and behave very much like humans)
- judging by the paper, the AI was surprisingly cheap to create (a couple of GPUs plus a few days' worth of publicly available gameplay streams). It was also cheap to run in real time (1 GPU)
- the research was done at a university that is doing AI research for the military
- it is China, a totalitarian dictatorship that is currently perpetrating a genocide. And it is known for using AI as one of the tools (e.g. for mass surveillance of Uyghurs)