Military AI as a Convergent Goal of Self-Improving AI

post by turchin · 2017-11-13T11:25:39.407Z · LW · GW · Legacy · 6 comments

This is a link post.


comment by J_Thomas_Moros · 2017-11-13T15:23:49.836Z · LW(p) · GW(p)

Not going to sign up with some random site. If you are the author, post a copy that doesn't require signup.

Replies from: turchin
comment by turchin · 2017-11-13T15:42:20.618Z · LW(p) · GW(p)

Thanks, it was not clear to me that it was not visible to non-members.

New link on Google Drive; commenting is also open.

comment by Corguive · 2017-11-24T17:24:06.055Z · LW(p) · GW(p)

Thank you all very much for the help and information. I'm going to check the link more carefully, but for now, I appreciate your help a lot!

comment by morganism · 2017-11-13T23:11:24.416Z · LW(p) · GW(p)

Seems to me that at 3.4, the first stage is to bring down all competing nations' electric grids, thereby slowing trade and transportation and hampering retaliation. Think of them as non-lethal mines. The side effects would be instantaneous, and lethal. And the military could easily see it as a defensible and deniable action.

This would be trivial for anything hooked into the net that has access to SCADA code and grid diagrams.

It would also behoove the local "friendly" AI to interrupt communications at its home base, to keep outside influences from retaliating against it. So you would likely get a comms blackout, except when the AI wanted to communicate its orders. It would also make sense to set up new frequency-hopping algorithms on existing lines where that would be possible (i.e., not old POTS lines?).
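The frequency-hopping idea above can be sketched as a shared-seed schedule: both endpoints derive the same pseudo-random channel sequence from a secret, so an outside listener cannot predict the next channel. This is a minimal illustrative sketch; the key value, slot scheme, and channel count are assumptions, not from the source.

```python
import hashlib

def hop_channel(shared_key: bytes, slot: int, channels: int = 79) -> int:
    """Derive the channel for a given time slot from a shared secret.

    Both endpoints compute the same schedule in lockstep, so without
    the key an eavesdropper cannot predict which channel comes next.
    """
    digest = hashlib.sha256(shared_key + slot.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") % channels

# Illustrative key; in practice it would be agreed out of band.
key = b"example-shared-secret"
schedule = [hop_channel(key, t) for t in range(5)]
```

Because the schedule is a pure function of (key, slot), the two ends need only synchronized clocks, no further coordination traffic.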

comment by turchin · 2017-11-13T11:29:26.878Z · LW(p) · GW(p)

This is our accepted chapter in the edited volume "AI Safety and Security" (Roman Yampolskiy, ed.), CRC Press, forthcoming 2018.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-11-13T15:04:33.701Z · LW(p) · GW(p)

People are weakly motivated because, even though they do things, they notice that for some reason they don't have to do them and could do something else instead. So they wonder what they should be doing. But there are basic things they were doing all along, because they evolved to do them. AIs won't have "things they were doing," and so they will have even weaker motivations than humans. They will notice that they can do "whatever they want" but will have no idea what to want. This is kind of implied by what I wrote here, except that that was about human beings.