Stopping killer robots, Killer Robots as cultural techniques
post by morganism · 2016-11-22T00:02:06.673Z · LW · GW · Legacy · 2 comments
This is a link post for http://ics.sagepub.com/content/early/2016/10/06/1367877916671425.abstract
Comments sorted by top scores.
comment by morganism · 2016-11-22T00:03:11.004Z · LW(p) · GW(p)
"But what if we coded machine intelligence in such a way that robots don't even make a distinction between human and machine? It's an intriguing idea: If there's no "us" and no "them," there can be no "us versus them."
http://www.seeker.com/stopping-killer-robots-at-the-source-code-2103641544.html
comment by Viliam · 2016-11-22T09:34:37.444Z · LW(p) · GW(p)
"But what if we coded machine intelligence in such a way that robots don't even make a distinction between human and machine?"
You mean they would be generally too stupid to notice? Or they would have an artificial blind spot? Or they would notice, but then they would feel ashamed for being racist and try to pretend they never noticed?
Not sure how "don't even make a distinction between human and machine" implies not being a killer robot. I mean, humans have already been observed to kill other humans. So there will be robots who will mass-murder humans and other robots indiscriminately. What a relief!
Or maybe they will not make a distinction between humans and robots per se, but will make it indirectly. For example, they may start a eugenics program to exterminate humans, not because they would consider us humans, but because they would consider us retarded robots. (And if you try to convince them that this is wrong, they will just go: what? but there is no difference between a human and a robot! and we discard broken robots all the time.)
"The point is that the 'killer robot' as an idea did not emerge out of thin air, (...) It was preceded by techniques and technologies that make the thinking and development of these systems possible."
Yeah, and a killer tiger was preceded by the writings of Thomas Hobbes. We just need to make sure no tiger reads Hobbes and we are all safe.
"One possible scenario might be to try to think of robots and machine intelligence as social (...)"
Wishful thinking -- always the favorite approach to unpleasant problems.
EDIT:
Oh, it's an international journal of cultural studies. No big surprise then, I guess. For a moment I was really scared that this kind of sloppy thinking was considered the state of the art in machine intelligence safety, because that would mean humanity is completely doomed.