Military AI as a Convergent Goal of Self-Improving AI
post by turchin
This is a link post for https://www.academia.edu/35130825/Military_AI_as_a_Convergent_Goal_of_Self-Improving_AI
Comments sorted by top scores.
comment by morganism ·
2017-11-13T23:11:24.416Z · LW(p) · GW(p)
Seems to me that at 3.4, the first stage would be to bring down all competing nations' electric grids, thereby slowing trade and transportation and hampering retaliation. Think of non-lethal mines: the side effects would be instantaneous, and lethal.
And the military could easily see it as a defensible and deniable action.
This would be trivial for anything hooked into the net that has access to SCADA code and grid diagrams.
It would also behoove the local "friendly" AI to interrupt communications at its home base, to keep outside influences from retaliating against it. So you would likely get a comms blackout, except when the AI wanted to communicate its orders. It would also make sense to set up new frequency-hopping algorithms on existing lines where that would be possible (i.e. not old POTS lines?).
comment by turchin ·
2017-11-13T11:29:26.878Z · LW(p) · GW(p)
This is our accepted chapter in the edited volume "AI Safety and Security" (Roman Yampolskiy, ed.), CRC Press. Forthcoming, 2018.
Replies from: entirelyuseless
↑ comment by entirelyuseless ·
2017-11-13T15:04:33.701Z · LW(p) · GW(p)
People are weakly motivated because, even though they do things, they notice that for some reason they don't have to do them and could do something else instead. So they wonder what they should be doing. But there are basic things they were doing all along, because they evolved to do them. AIs won't have "things they were doing," and so they will have even weaker motivations than humans. They will notice that they can do "whatever they want," but they will have no idea what to want. This is kind of implied by what I wrote here: except that it is about human beings.