[Linkpost] A Chinese AI optimized for killing

post by RomanS · 2022-06-03T09:17:42.028Z · LW · GW · 4 comments

The AI depicted in the Terminator movies is rather stupid: there are much more efficient ways to kill all humans than robots with guns. 

We can safely ignore the unrealistic Terminator-like scenario of AI X-risk. 

...Or can we?

Tsinghua University is a top university located in Beijing. It is heavily involved in research for the Chinese military. One of its military labs is called "The State Key Laboratory of Intelligent Technology and Systems".

In 2021, two of the university's researchers released a paper called Counter-Strike Deathmatch with Large-Scale Behavioural Cloning.
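The technique named in the paper's title, behavioural cloning, is plain supervised learning on recorded (observation, action) pairs: a policy is fit to imitate the demonstrator's action choices. A minimal illustrative sketch on synthetic data (this is not the paper's model or dataset -- the observation dimensions, action set, and demonstrator rule below are all made up for illustration):

```python
# Behavioural cloning in miniature: fit a classifier to (observation, action)
# pairs so that it imitates the demonstrator. A linear softmax policy trained
# with cross-entropy stands in for the paper's deep network over game frames.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "demonstrations": 2-D observations, 3 discrete actions.
# The (hypothetical) demonstrator's rule: action 1 if feature 1 dominates,
# otherwise action 0 or 2 depending on the sign of feature 0.
n, n_actions = 600, 3
obs = rng.normal(size=(n, 2))
actions = np.where(obs[:, 0] > obs[:, 1],
                   np.where(obs[:, 0] > 0, 0, 2),
                   1)

# Train by full-batch gradient descent on the cross-entropy loss.
W = np.zeros((2, n_actions))
b = np.zeros(n_actions)
onehot = np.eye(n_actions)[actions]
lr = 1.0
for _ in range(2000):
    logits = obs @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)       # softmax probabilities
    grad = (p - onehot) / n                 # d(cross-entropy)/d(logits)
    W -= lr * obs.T @ grad
    b -= lr * grad.sum(axis=0)

# How often the cloned policy picks the same action as the demonstrator.
accuracy = (np.argmax(obs @ W + b, axis=1) == actions).mean()
print(f"imitation accuracy: {accuracy:.2f}")
```

The paper applies the same idea at scale: a deep network over raw game frames instead of a linear model, and (per the discussion below) demonstration data scraped from publicly available gameplay streams instead of synthetic pairs.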

Some highlights:

A video linked in the article:

From the article and the authors' affiliation, I drew the following conclusions:

4 comments

Comments sorted by top scores.

comment by 1a3orn · 2022-06-03T12:04:48.838Z · LW(p) · GW(p)

The article title here is hyperbolic.

The title is misleading in the same way that calling AlphaStar "a Western AI optimized for strategic warfare" is misleading. Should we also say that the earlier Western work on Doom -- see VizDoom -- was also about creating "agents optimized for killing"? That was work on an FPS as well. This is just more of the same -- researchers trying to find interesting video games to work on.

This work transfers with just as much ease / difficulty to real-world scenarios as AI work on entirely non-military-skinned video games -- that is, it would take enormous engineering effort, and any use in military robots would be several levels of further work removed, such that the foundation of a military system would be very different. (I.e., military robots can't rely on behavioural cloning over completely static, unchanging environments / maps with clean command / movement relations, for many reasons.) Many researchers' work on navigating environments -- though not military-themed -- would be just as applicable.

Replies from: RomanS
comment by RomanS · 2022-06-03T13:11:13.474Z · LW(p) · GW(p)

The title is misleading in the same way that calling AlphaStar "a Western AI optimized for strategic warfare" is misleading.

That's a fair description of AlphaStar. For example, see this NATO report (pdf):

From the Game Map to the Battlefield – Using DeepMind's Advanced AlphaStar Techniques to Support Military Decision-Makers

Obviously, the militaries of both NATO and China are trying to apply any promising AI research that they deem relevant to the battlefield. And if your promising research is military-themed, it is much more likely to get their attention -- especially if you're working at a university that does AI research for the military (like the aforementioned Tsinghua University).

Should we also say that the earlier Western work on Doom -- see VizDoom -- was also about creating "agents optimized for killing"? That was work on an FPS as well. This is just more of the same -- researchers trying to find interesting video games to work on.

There is a qualitative difference between the primitive, pixelated Doom and the realistic CS. The latter is much easier to transfer to the battlefield because of its far more realistic graphics, physics, military tactics, and weaponry.

This work transfers with just as much ease / difficulty to real-world scenarios as AI work on entirely non-military-skinned video games...

Not sure about that. Clearly, CS is much more similar to a real battlefield than, say, Super Mario. Thus, the transfer should be much easier.

...it would take enormous engineering effort, and any use in military robots would be several levels of further work removed, such that the foundation of a military system would be very different...

Also not sure about that. For example, one of the simple scenarios in the paper is a gun-turret-like setup, where the agent is fixed in one place and shoots at moving targets (that look like real humans). I can imagine that one could put the exact same agent into a real automated turret, and with suitable middleware it would be capable of shooting down moving targets at decent rates.

The main issue is that once you have a mid-quality agent that can shoot at people, it is trivial to improve its skill and bring it to superhuman levels. The task is much easier than, say, self-driving cars, as the agent's only goal is to maximize damage, and the agent's body is expendable.

comment by A Ray (alex-ray) · 2022-06-05T21:50:50.326Z · LW(p) · GW(p)

This seems like an overly alarmist take on what is a pretty old trend of research. Six years ago, a number of universities were working on similar models for the VizDoom competition (IIRC the winners were Intel and Facebook). It seems good to track this kind of research, but IMO the conclusions here are not supported at all by the evidence presented.

Replies from: RomanS
comment by RomanS · 2022-06-06T06:55:42.322Z · LW(p) · GW(p)

The trend of research is indeed old. But in this case, my alarmism is based on the combination of the following factors:

  • the environment is much closer to a real battlefield than Doom etc.
  • the AI is literally optimized for killing humans (or, more precisely, entities that look and behave very much like humans)
  • judging by the paper, the AI was surprisingly cheap to create (a couple of GPUs + a few days of publicly available streams), and it was also cheap to run in real time (1 GPU)
  • the research was done at a university that is doing AI research for the military
  • it is China, a totalitarian dictatorship that is currently perpetrating a genocide, and that is known for using AI as one of its tools (e.g. for mass surveillance of Uyghurs)