Near-Term Risk: Killer Robots a Threat to Freedom and Democracy

post by Epiphany · 2013-06-14T06:28:15.906Z · LW · GW · Legacy · 105 comments

A new TED talk video just came out by Daniel Suarez, author of Daemon, explaining how autonomous combat drones with a capability called "lethal autonomy" pose a threat to democracy.  Lethal autonomy is what it sounds like - the ability of a robot to kill a human without requiring a human to make the decision.

He explains that a human decision-maker is not a necessity for combat drones to function.  This has potentially catastrophic consequences, as it would allow a small number of people to concentrate a very large amount of power, ruining the checks and balances of power between governments and their people and between different branches of government.  According to Suarez, about 70 countries have begun developing remotely piloted drones (like the Predator), the precursors to killer robots with lethal autonomy.

Daniel Suarez: The kill decision shouldn't belong to a robot

One thing he didn't mention in this video is that there's a difference in obedience levels between human soldiers and combat drones.  Drones are completely obedient, but humans can revolt.  Because they can rebel, human soldiers provide some obstacles to limit the power that would-be tyrants could otherwise obtain.  Drones won't provide this type of protection whatsoever.  Obviously, relying on human decision making is not perfect.  Someone like Hitler can manage to convince people to make poor ethical choices - but they still need to be convinced, and that requirement may play a major role in protecting us.  Consider this - it's unthinkable that today's American soldiers might suddenly decide this evening to follow a tyrannical leader whose goal is to have total power and murder all who oppose him.  It is not, however, unthinkable at all that the same tyrant, if empowered by an army of combat drones, could successfully launch such an attack without risking a mutiny.  The variety of power grabs a tyrant with a sufficiently powerful robot army could get away with is effectively unlimited.

Something else he didn't mention is that because we can optimize technologies more easily than we can optimize humans, it may be possible to produce killer robots faster and more cheaply than armies of human soldiers.  Considering the salaries and benefits paid to soldiers and the 18-year wait on human development, an overwhelmingly large army of killer robots could plausibly be built more quickly than a human army and with fewer resources.

Suarez's solution is to push for legislation that makes producing robots with lethal autonomy illegal.  There are, obviously, pros and cons to this method.  Another method (explored in Daemon) is that if the people have 3-D printers, they may be able to produce comparable weapons, which would then check and balance their government's power.  This method has pros and cons as well. I came up with a third method which is here.  I think it's better than the alternatives but I would like more feedback.

As far as I know, no organization, not even MIRI (I checked), is dedicated to preventing the potential political disasters caused by near-term tool AI (MIRI is interested in the existential risks posed by AGI).  That means it's up to us - the people - to develop our understanding of this subject and spread the word to others.  Of all the forums on the internet, LessWrong is one of the most knowledgeable when it comes to artificial intelligence, so it's a logical place to fire up a discussion on this.  I searched LessWrong for terms like "checks and balances" and "Daemon" and I don't see evidence that we've had a group discussion on this issue.  I'm starting by proposing and exploring some possible solutions to this problem and some pros and cons of each.

To keep things organized, let's put each potential solution, pro and con into a separate comment.

105 comments

Comments sorted by top scores.

comment by wedrifid · 2013-06-14T15:05:31.737Z · LW(p) · GW(p)

It has recently been suggested (by yourself) that:

Perhaps a better question would be "If my mission is to save the world from UFAI, should I expend time and resources attempting to determine what stance to take on other causes?" No matter your level of potential to learn multiple subjects, investing that time and energy into FAI would, in theory, result in a better outcome with FAI - though I am becoming increasingly aware of the fact that there are limits to how good I can be with subjects I haven't specialized in and if you think about it, you may realize that you have limitations as well.

It seems to me that the relevance of economic growth to FAI chances is closer to Eliezer's area of expertise, influence and comparative advantage than the determination of laws controlling military technology is to anyone here. Why is it worth evaluating and expressing opinions on this subject?

(Personally I am happy to spend some time talking about such things for the same reason that I spend some time talking about the details and implications of time travel in HPMoR fanfiction.)

Replies from: Epiphany
comment by Epiphany · 2013-06-14T19:59:23.523Z · LW(p) · GW(p)

I do make that mistake sometimes, however, this is not one of those times:

  • A. Whether I am knowledgeable here isn't very important (as opposed to the context in which I wrote that comment).

    I am not even advising people to agree on a particular strategy, I am spreading the word and getting them to think about it. Even if I tried to advise them, I don't expect LessWrong would take my ideas at face value and blindly follow them. In this case, evaluating and expressing opinions on this subject serves the purpose of getting people to think. Getting people to think is important in this case because this particular problem is likely to require that a large number of people get involved in their own fate. They're the ones that currently provide the checks and balances on government power. If they simply let the powerful decide amongst themselves, they may find that the powerful choose to maximize their power. Unfortunately, I don't currently know of anyone who is qualified and trustworthy enough to advise them on what's likely to happen and which method is likely to succeed, but at least stirring up debate and discussion will get them thinking about this. The more people think about it now, the more likely they are to have a decently well informed opinion and make functional choices later on. My knowledge level is adequate for this particular purpose.

  • B. Why should I specifically do this? Several reasons, actually:
  • Nobody else is currently doing it for us:

    There are no parties of sufficient size that I know of who are taking responsibility for spreading the word on this to make sure that a critical mass is reached. I've scoured the internet and not found a group dedicated to this. The closest we have, to my knowledge, is Suarez. Suarez is an author, and he seems bright and dedicated to spreading the word. I'm sure he's done research and put thought into this, and he is getting attention, but he's not enough. This cause needs an effort much larger and much more well-researched than one guy can pull off.

  • I "get it", but not everyone does.

    My areas of knowledge are definitely not optimal for this and I have no intentions of dedicating my life to this issue, but as a person who "gets it", I can perhaps convince a small group of relevant people (people who are likely to be interested in the subject) to seriously consider the issue. As we have seen, I have a greater understanding of this issue than some of the posters - I am explaining things like how land mines are not even comparable to killer robots in terms of their potential to win wars / wreck democracy. Somebody who "gets it" needs to be around to explain these kinds of things, or there may not be enough people in the group who "get it". I am mildly special because I "get it" and am willing to discuss this so other people "get it".

  • I am aware of this risk sooner than they are.

    Perhaps most important: I am aware of this risk sooner. (Explained in my next point.)

  • C. What I am doing is actually much bigger than it looks.

    I've seen the LessWrong Google Analytics. Some posts have accumulated 200,000+ visits over time. As I understand it, word spreads in an exponential fashion. Therefore, the more people that know about this in the beginning, the more people will know about it later. Even if this post got only 1,000 reads, entering those 1,000 reads into the beginning of the exponential growth curve is likely to result in many, many times as many people knowing about this. My post could, over the years, result in millions of people finding out about this sooner.

    It only takes a relatively small investment for me to help spread the word about this and I view the benefits as being worth that investment.

Conclusion:

If you "get it" and you care about this risk, I urge you to do the same thing. Post about this on Facebook, on Twitter, on other forums - wherever you have the ability to get a group of people to think about this. The couple of minutes it takes to tell 20 people now could mean that hundreds of people find out sooner. If any of you decide to spread the word, comment. I'd like to know.

Replies from: wedrifid
comment by wedrifid · 2013-06-15T15:29:01.983Z · LW(p) · GW(p)

If you "get it" and you care about this risk, I urge you to do the same thing. Post about this on Facebook, on Twitter, on other forums - wherever you have the ability to get a group of people to think about this. The couple of minutes it takes to tell 20 people now could mean that hundreds of people find out sooner. If any of you decide to spread the word, comment. I'd like to know.

I perceive plenty of risks regarding future military technology that are likely to result in the loss of life and liberty. People with power no longer requiring the approval (or insufficient disapproval) of other human participants to maintain their power is among the dangers. Increased ease of creating extremely destructive weapons (including killer robots) without large-scale enterprise (e.g. with the 3D printers you mentioned) is another.

This issue is not one I expect to have any influence over. This is a high-stakes game. A national security issue and an individual 'right to bear arms' issue rolled into one. It is also the kind of game where belief in doomsday predictions is enough to make people (or even a cause) lose credibility. To whatever extent my actions could have an influence at all, I have no particular confidence that it would be in a desirable direction.

Evangelism is not my thing. Even if it was, this wouldn't be the cause I chose to champion.

Replies from: Epiphany
comment by Epiphany · 2013-06-16T01:11:27.222Z · LW(p) · GW(p)

This issue is not one I expect to have any influence over.

I don't expect to have a large influence over it, but for a small investment, I make a small difference. You said once yourself that if your life could make even a minuscule difference to the probability that humanity survives, it would be worth it. And if a 1/4,204,800 sized fraction of my life makes a 0.000000001% difference in the chance that humanity doesn't lose democracy, that's worth it to me. Looking at it that way, does my behavior make sense?

It is also the kind of game where belief in doomsday predictions is enough to make people (or even a cause) lose credibility.

Ok. I feel like you should be saying that to yourself - you're the one who said you thought the 3-D printer idea would result in everyone dying. I think the worst thing I said is that killer robots are a threat to democracy. Did you find something in my writing that you pattern matched to "doomsday prediction"? If so, I will need an example.

Evangelism is not my thing. Even if it was, this wouldn't be the cause I chose to champion.

Spending 1/4,204,800 of my life to spread the word about something is best categorized as "doing my part" not "championing a cause". Like I said in my last comment:

"I have no intentions of dedicating my life to this issue."

After considering the amount of time I spent on this and the clear statement of my intentions (or lack of intentions), do you agree that I was never trying to champion this cause and was simply doing my part, wedrifid?

Replies from: wedrifid
comment by wedrifid · 2013-06-16T06:05:39.476Z · LW(p) · GW(p)

Looking at it that way, does my behavior make sense?

I suggested that Eliezer's analysis of economic growth and FAI is more relevant to Eliezer (in terms of his expertise, influence and comparative advantage) than military robot politics is to all of us (on each of the same metrics). To resolve the ambiguity there, I do not take the position that talk of robot killers is completely worthless. Instead I take the position that Eliezer spending a day or so analysing economic growth impacts on his life's work is entirely sensible. So instead of criticising your behavior I am criticising your criticism of another behaviour that is somewhat similar.

Ok. I feel like you should be saying that to yourself - you're the one who said you thought the 3-D printer idea would result in everyone dying.

I perceive a difference between the social consequences of replying with a criticism of a "right to bear automated-killer-robot arms" proposal in a comment and the social consequences of spreading the word to people I know (on facebook, etc.) about some issue of choice.

I think the worst thing I said is that killer robots are a threat to democracy.

Yes. My use of 'doomsday' to describe that scenario is lax. Please imagine that I found a more precise term and expressed approximately the same point.

After considering the amount of time I spent on this and the clear statement of my intentions (or lack of intentions), do you agree that I was never trying to champion this cause and was simply doing my part, wedrifid?

Please note that the quote that mentions 'championing a cause' was explicitly about myself. It was not made as a criticism of your behavior. It was made as a direct, quote denoted reply to your call for readers (made in response to myself) to evangelise to people we know on 'facebook, twitter and other forums'. I was explaining why I do not choose to do as you request even though by my judgement I do, in fact, "get it".

Taking a stance and expressing concern about something that isn't a mainstream issue comes with a cost. Someone who is mainstream in all ways but one tends to be more influential when it comes to that one issue than someone who has eccentric beliefs in all areas.

Replies from: Epiphany
comment by Epiphany · 2013-06-16T23:31:19.719Z · LW(p) · GW(p)

So instead of criticising your behavior I am criticising your criticism of another behaviour that is somewhat similar.

Oh okay.

I perceive a difference between the social consequences of replying ...

I see. I thought you were making some different comparison.

Yes. My use of 'doomsday' to describe that scenario is lax. Please imagine that I found a more precise term and expressed approximately the same point.

Okay. (:

Please note that the quote that mentions 'championing a cause' was explicitly about myself.

Okay, noted.

I was explaining why I do not choose to do as you request even though by my judgement I do, in fact, "get it".

I'm glad that you get it enough to see the potential benefit of spreading the word, even though you choose not to because you anticipate unwanted social consequences.

Taking a stance and expressing concern about something that isn't a mainstream issue comes with a cost. Someone who is mainstream in all ways but one tends to be more influential when it comes to that one issue than someone who has eccentric beliefs in all areas.

Hahaha! Yeah, I can see that. Though this really depends on who your friends are, or which friend group one chooses to spread the idea to.

At this stage, it is probably best to spread the word only to those who Seth Godin calls "early adopters" (defined as: people who want to know everything about their subject of interest aka nerds).

This would be why I told LessWrong as opposed to some other group.

comment by GeraldMonroe · 2013-06-16T15:13:54.725Z · LW(p) · GW(p)

Let's talk actual hardware.

Here's a practical, autonomous kill system that is possibly feasible with current technology: a network of drone helicopters armed with rifles and sensors that can detect the muzzle flashes, sound, and in some cases the projectiles of an AK-47 being fired.

Sort of like this aircraft: http://en.wikipedia.org/wiki/Autonomous_Rotorcraft_Sniper_System

Combined with sensors based on this patent: http://www.google.com/patents/US5686889

http://en.wikipedia.org/wiki/Gunfire_locator

and this one http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1396471&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F9608%2F30354%2F01396471

The hardware and software would be optimized for detecting AK-47 fire, though it would be able to detect most firearms. Some of these sensors work best if multiple platforms armed with the same sensor are spread out in space, so there would need to be several of these drones hovering overhead for maximum effectiveness.

How would this system be used? Whenever a group of soldiers leaves the post, they would all wear blue force trackers that clearly mark them as friendly. When they are at risk of attack, a swarm of drones follows them overhead. If someone fires at them, the following autonomous kill decision is made:

if( SystemIsArmed && EventSmallArmsFire && NearestBlueForceTracker > X meters && ProbableError < Y meters) ShootBack();
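
Read as a hedged sketch rather than a real fire-control specification, that one-line rule might be expanded roughly as follows; the field names, the thresholds X and Y, and the shoot_back stand-in are all invented for illustration, not taken from any actual system:

# Illustrative only: a toy expansion of the one-line rule above.
# All field names and thresholds (X_METERS, Y_METERS) are invented for this sketch.
from dataclasses import dataclass

X_METERS = 50.0   # assumed minimum distance from the shooter to the nearest friendly tracker
Y_METERS = 5.0    # assumed maximum acceptable localization error

@dataclass
class FireEvent:
    small_arms_fire: bool        # acoustic/optical classifier reports a gunshot
    nearest_blue_force_m: float  # distance from the shooter to the nearest blue force tracker
    probable_error_m: float      # estimated error in the shooter's computed position

def decide(system_armed: bool, event: FireEvent) -> bool:
    # Return True only if every condition in the original rule holds.
    return (system_armed
            and event.small_arms_fire
            and event.nearest_blue_force_m > X_METERS
            and event.probable_error_m < Y_METERS)

if decide(system_armed=True,
          event=FireEvent(small_arms_fire=True, nearest_blue_force_m=120.0, probable_error_m=2.5)):
    print("shoot_back() would be called")  # stand-in for the hypothetical ShootBack()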

Sure, a system like this might make mistakes. However, here's the state-of-the-art method used today:

http://www.youtube.com/watch?list=PL75DEC9EEB25A0DF0&feature=player_detailpage&v=uZ2SWWDt8Wg

This same youtube channel has dozens of similar combat videos. An autonomous killing drone system would save soldiers' lives and kill fewer civilians. (Drawbacks include the high cost to develop and maintain it.)

Other, more advanced systems are also at least conceivable. Ground robots that could storm a building, killing anyone carrying a weapon or matching specific faces? The current method is to blow the entire building to pieces. Even if the robots made frequent errors, they might be more effective than bombing the building.

Replies from: Epiphany
comment by Epiphany · 2013-06-16T23:22:08.734Z · LW(p) · GW(p)

Thanks for the hardware info.

An autonomous killing drone system would save soldiers' lives and kill fewer civilians.

In the short-term... What do you think about the threat they pose to democracy?

drawbacks include high cost to develop and maintain

Do you happen to know how many humans need to be employed for a given quantity of these weapons to be produced?

Replies from: GeraldMonroe
comment by GeraldMonroe · 2013-06-17T18:25:44.489Z · LW(p) · GW(p)

I wanted to make a concrete proposal. Why does it have to be autonomous? Because in urban combat, the combatants will usually choose a firing position that has cover. They "pop up" from the cover, take a few shots, then position themselves behind cover again. An autonomous system could presumably return accurate fire much faster than human reflexes allow. (It wouldn't be instant: there's a delay while the servos of the automated gun aim at the target, plus signal delays - you have to wait for the sound to reach all the acoustic sensors in the drone swarm, then there are processing delays, then the flight time of the return-fire projectiles.)
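
To give a feel for those delays, here is a rough back-of-the-envelope latency budget; the range, processing time, and servo time are illustrative assumptions rather than measurements from any real system:

# Rough latency budget for one autonomous return-fire cycle (illustrative numbers only).
SPEED_OF_SOUND_M_S = 343.0    # at roughly 20 C
MUZZLE_VELOCITY_M_S = 850.0   # ballpark for a rifle round; assumption

shooter_range_m = 300.0       # assumed distance from the shooter to the drone swarm

t_sound = shooter_range_m / SPEED_OF_SOUND_M_S        # gunshot sound reaching the acoustic sensors
t_processing = 0.1                                    # assumed classification + triangulation time
t_servo = 0.5                                         # assumed time to slew the gun onto the target
t_projectile = shooter_range_m / MUZZLE_VELOCITY_M_S  # return fire travelling back to the shooter

total = t_sound + t_processing + t_servo + t_projectile
print(f"~{total:.1f} s from trigger pull to return fire under these assumptions")
# Roughly 1.8 s here - not instant, but plausibly faster than a human spotting
# the firing position and returning aimed fire.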

Also, the autonomous mode would hopefully be chosen only as a last resort, with a human normally in the loop somewhere to authorize each decision to fire.

As for a threat to democracy? Defined how? You mean a system of governance where a large number of people, who are easily manipulated via media, on average know fuck-all about a particular issue, are almost universally not using rational thought, and where votes give everyone a theoretically equal say regardless of knowledge or intelligence?

I don't think that democracy is something that should be treated as an ideal or a terminal value on this website. It has too many obvious faults.

As for humans needing to be employed: autonomous return-fire drones are going to be very expensive to build and maintain. That "expense" means that the labor of thousands is needed somewhere in the process.

However, in the long run, it's obviously possible to build factories to churn them out faster than soldiers can be replaced. Numerous examples of this happened during WW2, where even high-technology items such as aircraft were easier to replace than the pilots who flew them.

Replies from: Houshalter
comment by Houshalter · 2014-03-10T06:29:51.352Z · LW(p) · GW(p)

Democracy is imperfect, but dictatorships are worse.

As for humans needing to be employed: autonomous return-fire drones are going to be very expensive to build and maintain. That "expense" means that the labor of thousands is needed somewhere in the process.

I honestly don't think this is the case. Hobbyists working on their own with limited budgets have made autonomous paintball guns, as well as all sorts of other robots and UAVs. Conceivably robots could be incredibly cheap, much much cheaper than the average soldier.

comment by Qiaochu_Yuan · 2013-06-14T08:37:39.840Z · LW(p) · GW(p)

Calling this "AI risk" seems like a slight abuse of the term. The term "AI risk" as I understand it refers to risks coming from smarter-than-human AI. The risk here isn't that the drones are too smart, it's that they've been given too much power. Even a dumb AI can be dangerous if it's hooked up to nuclear warheads.

Replies from: wedrifid, Luke_A_Somers, Epiphany, Epiphany
comment by wedrifid · 2013-06-14T14:42:57.278Z · LW(p) · GW(p)

Calling this "AI risk" seems like a slight abuse of the term. The term "AI risk" as I understand it refers to risks coming from smarter-than-human AI.

I was about to voice my agreement and suggest that if people want to refer to this kind of thing (killer robots, etc.) as "AI risk" in an environment where AI risk more typically refers to strong AGI, then it is worth at least including a qualifier such as "(weak) AI risk" to prevent confusion. However, looking at the original post, it seems the author already talks about "near-term tool AI" as well as explicitly explaining the difference between that and the kind of thing MIRI warns about.

Replies from: Epiphany
comment by Epiphany · 2013-06-14T17:51:31.522Z · LW(p) · GW(p)

I originally had "AI risk" in there, but removed it. It's true that I think we should seriously consider that stupid AIs can pose a major threat, and that the term "AI risk" shouldn't leave that out, but if people might ignore my message over the term, it makes more sense to change the wording, so I did.

comment by Luke_A_Somers · 2013-06-14T14:24:54.856Z · LW(p) · GW(p)

The issue seems to me to be AI that has too much power over people without being friendly. Whether it gets this power by being handed a gun or by outsmarting us doesn't seem as relevant.

comment by Epiphany · 2013-06-14T09:03:02.918Z · LW(p) · GW(p)

The risk here isn't that the drones are too smart, it's that they've been given too much power.

No. Actually, that is not the risk I'm discussing here. I would not argue that it isn't dangerous to give them the ability to kill - it is. But my point here is that lethal autonomy could give people too much power - that is to say, it could redistribute power unevenly, undoing all the checks and balances and threatening democracy.

comment by Epiphany · 2013-06-14T09:01:02.869Z · LW(p) · GW(p)

According to this Wikipedia page, the Computer History Museum appears to think Deep Blue, the chess-playing software, belongs in the "Artificial Intelligence and Robotics" gallery. It's not smarter than a human - all it can do is play a game, and beating humans at a game does not qualify as being smarter than a human.

The dictionary doesn't define it that way, apparently all it needs to do is something like perceive and recognize shapes.

And what about the term "tool AI"?

Why should I agree that AI always means "smarter than human"? I thought we had the term AGI to make that distinction.

Maybe your point here is not that AI always means "smarter than human" but that "AI risk" for some reason necessarily means the AI has to be smarter than humans for it to qualify as an AI risk. I would argue that perhaps we misunderstand risks posed by AI - that software can certainly be quite dangerous because of its intelligence even if it is not as intelligent as humans.

comment by fubarobfusco · 2013-06-14T21:51:18.103Z · LW(p) · GW(p)

(Trigger warning for atrocities of war.)

Human soldiers can revolt against their orders, but human soldiers can also decide to commit atrocities beyond their orders. Many of the atrocities of war are specifically human behaviors. A drone may bomb you or shoot you — very effectively — but it is not going to decide to torture you out of boredom, rape you in front of your kids, or cut off your ears for trophies. Some of the worst atrocities of recent wars — Vietnam, Bosnia, Iraq — have been things that a killer robot simply isn't going to do outside of anthropomorphized science-fantasy fiction.

The orders given to an autonomous drone, and all of the major steps of its decision-making, can be logged and retained indefinitely. Rather than advocating against autonomous drone warfare, it would be better to advocate for accountable drone warfare.

Replies from: WingedViper, Epiphany
comment by WingedViper · 2013-06-14T22:26:37.897Z · LW(p) · GW(p)

That is indeed a fair point, but I think it is not so important when talking about a tyrant gaining control of his own country. Because the soldiers in Iraq, Bosnia etc. saw the people they tortured (or similar) not as people, but as "the Enemy". That kind of thing is much harder to achieve when they are supposed to be fighting their own countrymen.

comment by Epiphany · 2013-06-15T00:19:43.454Z · LW(p) · GW(p)

I agree that the killer robots on the horizon won't have a will to commit atrocities (though I'm not sure what an AGI killer robot might do), however, I must note that this is a tangent.

The meaning of the term "atrocity" in my statement was more to indicate things like genocide and oppression. I was basically saying "humans are capable of revolting in the event that a tyrant wants to gain power whereas robots are not".

I think I'll replace the word atrocities for clarity.

comment by CarlShulman · 2013-06-14T18:32:53.299Z · LW(p) · GW(p)

Consider this - it's unthinkable that today's American soldiers might suddenly decide this evening to follow a tyrannical leader whose goal is to have total power and murder all who oppose. It is not, however, unthinkable at all that the same tyrant, if empowered by an army of combat drones, could successfully launch such an attack without risking a mutiny.

Yes, this is a problem.

As far as I know, no organization, not even MIRI (I checked), is dedicated to preventing the potential political disasters caused by near-term tool AI

This is the sort of thing that machine ethics people spend their time on (although they spend more time on law-of-war issues that arise with existing technology than on internal checks-and-balances).

Replies from: Epiphany
comment by Epiphany · 2013-06-14T19:33:42.524Z · LW(p) · GW(p)

I absolutely scoured the internet about 6 months ago looking for any mention of checks and balances, democracy, power balances and killing robots, AI soldiers, etc. (I used all the search terms I could think of to do this) and didn't find them. Is this because they're minuscule in size, don't publish much, use special jargon, or for some other reason?

Do you know whether any of them are launching significant public education campaigns? (I assume not, based on what I have seen, but I could be wrong.)

I would very much like links to all relevant web pages you know of that talk specifically about the power imbalances caused by using machines for warfare. Please provide at least a few of the best ones if it is not too much to ask.

Thanks for letting me know about this. I have asked others and gotten no leads!

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-14T17:52:52.932Z · LW(p) · GW(p)

When killer robots are outlawed, only rogue nations will have massive drone armies.

An ideal outcome here would be if counter-drones have an advantage over drones, but it's hard to see how this could obtain when counter-counter-drones should be in a symmetrical position over counter-drones. A second-best outcome would be no asymmetrical advantage of guerilla drone warfare, where the wealthiest nation clearly wins via numerical drone superiority combined with excellent enemy drone detection.

...you know, at some point the U.S. military is going to pay someone $10 million to conclude what I just wrote and they're going to get it half-wrong. Sigh.

Replies from: Yosarian2, atucker, shminux, Epiphany
comment by Yosarian2 · 2013-06-17T20:49:28.923Z · LW(p) · GW(p)

When killer robots are outlawed, only rogue nations will have massive drone armies.

That's not necessarily a huge issue. If all the major powers agree not to have automated killing drones, and a few minor rogue states (say, Iran) ignore that and develop their own killer drones, then (at least in the near term) that probably won't give them a big enough advantage over semi-autonomous drones controlled by major nations to be a big deal; an Iranian automated drone army probably still isn't a match for the American military, which has too many other technological advantages.

On the other hand, if one or more major powers start building large numbers of fully autonomous drones, then everyone is going to. That definitely sounds like a scenario we should try to avoid, especially since that kind of arms race is something that I could see eventually leading to unfriendly AI.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-18T00:07:53.515Z · LW(p) · GW(p)

One issue is how easy it would be to secretly build an army of autonomous drones.

Replies from: Yosarian2
comment by Yosarian2 · 2013-06-18T01:21:02.312Z · LW(p) · GW(p)

Developing the technology in secret is probably quite possible. Large-scale deployment, though, building a large army of them, would probably be quite hard to hide, especially from modern satellite photography and information technology.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-21T03:43:04.900Z · LW(p) · GW(p)

Why? Just build a large number of non-autonomous drones and then upgrade the software at the last minute.

Replies from: Yosarian2
comment by Yosarian2 · 2013-06-21T20:01:44.231Z · LW(p) · GW(p)

I suppose. Would that really give you enough of an advantage to be worth the diplomatic cost, though? The difference between a semi-autonomous Predator drone and a fully-autonomous Predator drone in military terms doesn't seem all that significant.

Now, you could make a type of military unit that would really benefit from being fully autonomous, like a fully autonomous air-to-air fighter (not really practical with semi-autonomous drones because of the delayed reaction time), but it seems like that would be much harder to hide.

comment by atucker · 2013-06-14T23:15:07.024Z · LW(p) · GW(p)

I think that if you used an EMP as a stationary counter-drone you would have an advantage over drones in that most drones need some sort of power/control in order to keep on flying, and so counter-drones would be less portable, but more durable than drones.

Replies from: Epiphany
comment by Epiphany · 2013-06-15T01:33:36.611Z · LW(p) · GW(p)

Is there not a way to shield combat drones from EMP weapons? I wouldn't be surprised if they are already doing that.

Replies from: atucker
comment by atucker · 2013-06-15T01:41:10.707Z · LW(p) · GW(p)

Almost certainly, but the point that stationary counter-drones wouldn't necessarily be in a symmetric situation to counter-counter-drones holds. Just swap in a different attack/defense method.

Replies from: Epiphany
comment by Epiphany · 2013-06-15T01:49:02.210Z · LW(p) · GW(p)

I see. The existence of the specific example caused me to interpret your post as being about a specific method, not a general strategy.

To the strategy, I say:

I've heard that defense is more difficult than offense. If the strategy you have defined is basically:

Original drones are offensive and counter-drones are defensive (to prevent them from attacking, presumably).

Then if what I heard was correct, this would fail. If not at first, then likely over time as technology advances and new offensive strategies are used with the drones.

I'm not sure how to check to see if what I heard was true but if defense worked that well, we wouldn't have war.

Replies from: atucker
comment by atucker · 2013-06-15T08:51:25.963Z · LW(p) · GW(p)

This distinction is just flying/not-flying.

Offense has an advantage over defense in that defense must cover every offensive strategy the attacker might try, while offense only needs one undefended plan in order to succeed.

I suspect that not-flying is a pretty big advantage, even relative to offense/defense. At the very least, moving underground (and doing hydroponics or something for food) makes drones just as offensively helpful as missiles. A non-flying platform can also devote more energy and matter to whatever it is doing than a flying one, which allows for more exotic sensing and destructive capabilities.

Replies from: ikrase
comment by ikrase · 2013-06-17T22:39:38.553Z · LW(p) · GW(p)

Also, what's offense and what's defense? Anti-aircraft artillery (effective against drones? I think current air drones are optimized for use against low-tech enemies w/ few defenses) is a "defense" against 'attack from the air', but 'heat-seeking AA missiles', 'flak guns', 'radar-guided AA missiles' and 'machine gun turrets' are all "offenses" against combat aircraft, where the defenses are evasive maneuvers, altitude, armor, and chaff/flare decoys.

In WWI, defenses (machine guns and fortifications) were near-invincible, and killed attackers without time for them to retreat.

I think that current drones are pretty soft and might even be subject to hacking (I seem to remember something about unencrypted video?), but that would change as soon as somebody starts making real countermeasures.

comment by Shmi (shminux) · 2013-06-14T18:01:43.370Z · LW(p) · GW(p)

at some point the U.S. military is going to pay someone $10 million to conclude what I just wrote

Gain enough status to make that someone likely to be you.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-06-14T18:32:43.766Z · LW(p) · GW(p)

That is not how government contracts work.

comment by Epiphany · 2013-06-15T00:23:50.839Z · LW(p) · GW(p)

This took effort to parse. I think what you're saying is:

  • If we're going to have killer drones, there needs to be something to check their power. Example: counter-drones.

  • If we're going to have counter-drones, we need to check the power of the counter-drones. Example: counter-counter-drones.

  • If counter-counter-drones can dominate the original drones, then counter-drones probably aren't strong enough to check and balance the original drones. (Either because the counter-counter-drones will become the new original drones or because the counter-drones would be intentionally less powerful than the original drones so that the counter-counter-drones could counter them, making the counter-drones useless.)

(I want everyone to understand, so I'm writing it all out - let me know if I'm right.)

And you propose "no asymmetrical advantage of guerilla drone warfare... etc" which isn't clear to me because I can interpret multiple meanings:

  • Trash the drones vs. counter-drones vs. counter-counter-drones idea?

  • Make sure drones don't have an advantage at guerilla drone warfare?

  • Decide who wins wars based on who has more drones and drone defenses instead of actually physically battling?

What did your statement mean?

I think if we're going to check the power of killing drones, we need to start with defining the sides using a completely different distinction unlike "drone / counter-drone". Reading this gave me a different idea for checking and balancing killer robots and advanced weapons. I can see some potential cons to it, but I think it might be better than the alternatives. I'm curious about what pros and cons you would think of.

Replies from: wedrifid
comment by wedrifid · 2013-06-15T16:30:04.598Z · LW(p) · GW(p)

(I want everyone to understand, so I'm writing it all out - let me know if I'm right.)

This isn't quite what Eliezer said. In particular Eliezer wasn't considering proposals or 'what we need' but instead making observations about scenarios and the implications they could have. The key point is the opening sentence:

When killer robots are outlawed, only rogue nations will have massive drone armies.

This amounts to dismissing Suarez's proposal to make autonomous killer robots illegal as absurd. Unilaterally disarming oneself without first preventing potential threats from having those same weapons is crazy for all the reasons it usually is. Of course there is the possibility of using the threat of nuclear strike against anyone who creates killer robots but that is best considered a separate proposal and discussed on its own terms.

An ideal outcome here would be if counter-drones have an advantage over drones, but it's hard to see how this could obtain when counter-counter-drones should be in a symmetrical position over counter-drones.

This isn't saying we need drones (or counter or counter-counter drones). It is rather saying:

  • We don't (yet) know the details of how the relevant technology will develop or the relative strengths and weaknesses thereof.
  • It would be great if we discovered that for some reason it is easier to create drones that kill drones than drones that hurt people. That would mean that defence has an advantage when it comes to drone wars. That would result in less attacking (with drones) and so the drone risk would be much, much lower. (And a few other desirable implications...)
  • The above doesn't seem likely. Bugger.

Decide who wins wars based on who has more drones and drone defenses instead of actually physically battling?

This wouldn't be any form of formal agreement. Instead, people who are certain to lose tend to be less likely to get into fights. It amounts to the same thing.

Replies from: Epiphany
comment by Epiphany · 2013-06-16T01:01:25.833Z · LW(p) · GW(p)

This amounts to dismissing Suarez's proposal to make autonomous killer robots illegal as absurd.

Yeah, I got that, and I think that his statement is easy to understand, so I'm not sure why you're explaining that to me. In case you hadn't noticed, I wrote out various cons for the legislation idea which were either identical in meaning to his statement or along the same lines as "making them illegal is absurd". He got several points for that and his comment was put at the top of the page. I wrote them first and was evidently ignored (by karma clickers if not by you).

This isn't saying we need drones (or counter or counter-counter drones). It rather saying:

I didn't say that he was saying that either.

This wouldn't be any form of formal agreement. Instead, people who are certain to lose tend to be less likely to get into fights.

I agree that a formal agreement would be meaningless here, but that people will make a cost-benefit analysis when choosing whether to fight is so obvious I didn't think he was talking about that - it doesn't seem like a thing that needs saying. Maybe what he meant was not "people will decide whether to fight based on whether it's likely to succeed" or "people will make formal agreements" but something more like "using killer robots would increase the amount or quality of data we have in a significant way and this will encourage that kind of decision-making".

What if that's not the case, though? What if having a proliferation of deadly technologies makes it damned near impossible to figure out who is going to win? That could result in a lot more wars...

Now "the great filter" comes to mind again. :|

Do you know of anyone who has written about:

A. Whether it is likely for technological advancement to make it significantly more difficult to figure out who will win wars.
B. Whether it's more likely for people to initiate wars when there's a lot of uncertainty.

We might be lucky - maybe people are far less likely to initiate wars if it isn't clear who will win... I'd like to read about this topic if there's information on it.

Replies from: wedrifid
comment by wedrifid · 2013-06-16T06:37:42.853Z · LW(p) · GW(p)

Yeah, I got that, and I think that his statement is easy to understand so I'm not sure why you're explaining that to me.

  • You wrote a comment explaining what Eliezer meant.
  • You were wrong about what Eliezer meant.
  • You explicitly asked to be told whether you were right.
  • I told you you were not right.
  • I made my own comment explaining what Eliezer's words mean.

Maybe you already understood the first sentence of Eliezer's comment and only misunderstood the later sentences. That's great! By all means ignore the parts of my explanation that are redundant.

Note that when you make comments like this, including the request for feedback, getting a reply like mine is close to the best-case scenario. Alternatives would be finding you difficult to speak to and just ignoring you, or dismissing what you have to say in the entire thread because this particular comment is a straw man.

The problem that you have with my reply seems to be caused by part of it being redundant for the purpose of facilitating your understanding. But in cases where there are obvious and verifiable failures of communication, a little redundancy is a good thing. I cannot realistically be expected to perfectly model which parts of Eliezer's comment you interpreted correctly and which parts you did not. After all, that task is (strictly) more difficult than the task of interpreting Eliezer's comment correctly. The best I can do is explain Eliezer's comment in my own words and you can take or leave each part of it.

I wrote them first and was evidently ignored (by karma clickers if not by you).

It is frustrating not being rewarded for one's contributions when others are.

I didn't say that he was saying that either.

Let me rephrase. The following quote is not something Eliezer said:

If we're going to have killer drones, there needs to be something to check their power. Example: counter-drones.


I agree that a formal agreement would be meaningless here, but that people will make a cost-benefit analysis when choosing whether to fight is so obvious I didn't think he was talking about that - it doesn't seem like a thing that needs saying.

Eliezer didn't say it. He assumed it (and/or various loosely related considerations) when he made his claim. I needed to say it because rather than assuming a meaning like this 'obvious' one, you assumed that it was a proposal:

Decide who wins wars based on who has more drones and drone defenses instead of actually physically battling?


What if that's not the case, though? What if having a proliferation of deadly technologies makes it damned near impossible to figure out who is going to win? That could result in a lot more wars...

Yes. That would be bad. Eliezer is making the observation that if technology evolves in such a way (and it seems likely), the outcome would be less desirable than if, for some (somewhat surprising) technical reason, the new dynamic did not facilitate asymmetric warfare.

Now "the great filter" comes to mind again.

Yes. Good point.

Do you know of anyone who has written about:

A. Whether it is likely for technological advancement to make it significantly more difficult to figure out who will win wars.
B. Whether it's more likely for people to initiate wars when there's a lot of uncertainty.

I do not know, but am interested.

Replies from: Epiphany, Epiphany
comment by Epiphany · 2013-06-16T23:18:08.648Z · LW(p) · GW(p)

What if having a proliferation of deadly technologies makes it damned near impossible to figure out who is going to win? That could result in a lot more wars.

Yes. That would be bad.

Now "the great filter" comes to mind again.

Yes. Good point.

Do you know of anyone who has written about:
A. Whether it is likely for technological advancement to make it significantly more difficult to figure out who will win wars.
B. Whether it's more likely for people to initiate wars when there's a lot of uncertainty.

I do not know, but am interested.

Hmm. I wonder if this situation is comparable to any of the situations we know about.

  1. Clarifying my questions:

    • When humans feel confused about whether they're likely to win a deadly conflict that they would hypothetically initiate, are they more likely to react to that confusion by acknowledging it and avoiding conflict, or by being overconfident / denying the risk / going irrational and taking the gamble?

    • If humans are normally more likely to acknowledge the confusion, what circumstances may make them take a gamble on initiating war?

    • When humans feel confused about whether a competitor has enough power to destroy them, do they react by staying peaceful? The "obvious" answer is yes, but it's not good to feel certain about things before even thinking about them. For example: if animals are backed into a corner by a human, they fight, despite the obvious size difference. There might be certain situations where a power imbalance triggers the "backed into a corner" instinct. For some ideas about what those situations might be, I'd look at situations in which people over-react to confusion by "erring on the side of caution" (deciding that the opponent is a threat) and then initiate war to take advantage of the element of surprise as part of an effort at self-preservation. I would guess that whether people initiate war in this scenario probably has a lot to do with how big the element-of-surprise advantage is and how quickly they can kill their opponent.

    • Does the imbalance between defense and offense grow over time? If so, would people be more or less likely to initiate conflict if defense essentially didn't exist?

Now I'm thinking about whether we have data that answers these or similar questions.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-06-18T00:06:01.514Z · LW(p) · GW(p)

I think a more important question than "how likely am I to win this conflict?" is "will my odds increase or decrease by waiting?"

comment by Epiphany · 2013-06-16T22:42:06.298Z · LW(p) · GW(p)

But in cases where there are obvious and verifiable failures of communication, a little redundancy is a good thing.

Sorry for not seeing this intention. Thanks for your efforts.

because this particular comment is a straw man

Do you mean to say that I attacked someone with an (either intentional or unintentional) misinterpretation of their words? Since my intention with the comment referenced just prior to your statement was to clarify, and in no way to attack, I'm not sure which comment you're referring to.

comment by cousin_it · 2013-06-14T12:26:36.527Z · LW(p) · GW(p)

Are a few people with killer drones more dangerous than a few people with nukes?

Replies from: WingedViper
comment by WingedViper · 2013-06-14T12:49:09.717Z · LW(p) · GW(p)

Yes they are, because nukes can only be aimed once and then destroy the targets (so they are just a direct threat) while autonomous robots can be used to control all kinds of stuff (checkpoints, roads, certain people). Also they allow much more accurate killing while nukes have a huge area of effect. Also I think (that is speculation, admittedly) that you would need fewer people to control a drone army than nukes of comparable destructive power.

Replies from: ikrase
comment by ikrase · 2013-06-17T22:45:41.286Z · LW(p) · GW(p)

I disagree strongly. (it depends on the size of the drone army, and what sort of people they are.)

A drone army can probably be approximated as a slavishly loyal human army.

Terrorists would probably go for the nuke if they thought it achievable.

Rogue states are probably more dangerous with a (large, robust) drone army because it can reduce the ability of a human military to revolt, and possibly do other things.

Replies from: WingedViper
comment by WingedViper · 2013-06-18T10:01:50.847Z · LW(p) · GW(p)

What do you disagree strongly with? My speculation that you would need fewer people to control them? I'm not sure about that so if you can bring in a good argument you can change my view on that.

Terrorists are not our problem (in general and in this specific state). Terrorists with nukes cannot feasibly control a country with them.

I am talking about people that have easy access to drones and want to control a country with them. Traditional totalitarian techniques plus drones is what I am really worried about, not terrorists.

So I admit that with "a few people with drones vs. nukes" I thought about a (close to) worst case. Obviously some low-tech terrorists in Afghanistan are not a real substantial problem when they control drones, but high-ranking military officials with power fantasies are. Of course rogue states with drones are even more dangerous...

Replies from: ikrase
comment by ikrase · 2013-06-18T19:14:01.548Z · LW(p) · GW(p)

I think a rogue state with drones is about as dangerous as a rogue state with a well-equipped army. (note: all of this pretty much assumes something like the next ten to fifty years of physical tech, and that drone AIs are very focused. If AI supertech or extremely tiny deadly drones come into it, it gets much worse.)

I think that drone armies compared to human armies are better for short-term slavish loyalty (including cases where the chain of command is broken). However, unless they are controlled by a stable, central entity with all the supporting infrastructure (such as the case where a tyrant uses a drone army to suppress rebellion), maintenance and a wide variety of other issues start to become a big problem.

You might get some Napoleons.

I also think that drones are likely to be poor substitutes for infantry out of actual combat.

comment by Emile · 2013-06-14T08:24:37.194Z · LW(p) · GW(p)

it would allow a small number of people to concentrate a very large amount of power

Possibly a smaller number than with soldiers, but not that small - you still need to deal with logistics, maintenance, programming...

it's unthinkable today that American soldiers might suddenly decide to follow a tyrannical leader tomorrow whose goal is to have total power and murder all opponents. It is not, however, unthinkable at all that the same tyrant, if empowered by an army of combat drones, could successfully launch such an attack without risk of mutiny.

It might be a bit more likely, but it still seems like a very unlikely scenario (0.3% instead of 0.1%?), and it remains less likely than other disaster scenarios (breakdown of infrastructure/economy leading to food shortages, panic, and riots; a big war starting in one of the less stable parts of the world (ex-Yugoslavia, China/Taiwan, Middle East) and spilling over; an ideological movement motivating a big part of the population into violent action; UFAI; etc.)

EDIT: to expand a bit on this, I don't think replacing soldiers by drones increases risk much all else being equal because the kind of things soldiers would refuse to do are also the kind of things the (current) command structure is unlikely to want to do anyway.

Replies from: Epiphany
comment by Epiphany · 2013-06-14T08:44:09.632Z · LW(p) · GW(p)

Ok let's get some numbers.

I highly doubt that either one of us would be able to accurately estimate how many employees it would require to make a robot army large enough to take over a population, but looking at some numbers will at least give us some perspective. I'll use the USA as an example.

The USA has 120,022,084 people fit for military service according to Wikipedia. (The current military is much smaller, but if there were a takeover in progress, that's the maximum number of hypothetical America soldiers we could have defending the country.)

We'll say that making a robot army takes as many programmers as Microsoft and as many engineers and factory workers as Boeing:

Microsoft employees: 97,811
Boeing employees: 171,700

That's 0.22% of the number of soldiers.

I'm not sure how many maintenance people and logistics people it would require, but even if we double that .22%, we still have only .44%.

Is it possible that 1 in 200 people or so are crazy enough to build and maintain a robot army for a tyrant?
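
For concreteness, a minimal sketch of the arithmetic as stated (the doubling for maintenance and logistics is, as noted above, a rough guess):

# Back-of-the-envelope check of the figures quoted above.
fit_for_service = 120_022_084   # Wikipedia figure cited above
microsoft = 97_811
boeing = 171_700

builders = microsoft + boeing                 # 269,511 people
ratio = builders / fit_for_service            # ~0.22%
with_support = 2 * ratio                      # doubled for maintenance/logistics: ~0.45%

print(f"builders: {builders:,}")
print(f"share of those fit for service: {ratio:.2%}")
print(f"doubled for support roles: {with_support:.2%}  (~1 in {round(1 / with_support)})")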

Number of sociopaths: 1 in 20.

And you wouldn't even have to be a sociopath to follow a new Hitler.

I like that you brought up the point that it would take a significant number of employees to make a robot army happen, but I'm not convinced that this makes us safe. This is especially because they could build military robots that are very close to lethal autonomy but not quite, tell people they're making something else, write software for the basic functions like walking and seeing, and then have a very small number of people modify the hardware and/or software to turn them into autonomous killers.

Of course, once the killer robots are made, then they can just use them to coerce the maintenance and logistics people.

How many employees would have to be aware of their true ambitions? That might be the key question.

Replies from: RolfAndreassen, Randaly
comment by RolfAndreassen · 2013-06-14T21:59:08.501Z · LW(p) · GW(p)

The USA has 120,022,084 people fit for military service according to Wikipedia. (...) That's 0.22% of the number of soldiers.

Excuse me? You are taking the number of military-age males and using it as the number of soldiers! The actual US armed forces are a few million. 5% would be a much better estimate. This aside, you are ignoring that "lethal autonomy" is nowhere near the same thing as "operational autonomy". A Predator drone requires more people to run it - fuelling, arming, polishing the paint - than a fighter aircraft does.

Of course, once the killer robots are made, then they can just use them to coerce the maintenance and logistics people.

How? "Do as I say, or else I'll order you to fire up the drones on your base and have them shoot you!" And while you might credibly threaten to instead order the people on the next base over to fire up their drones, well, now you've started a civil war in your own armed forces. Why will that work better with drones than with rifles?

Again, you are confusing lethal with operational autonomy. A lethally-autonomous robot is just a weapon whose operator is well out of range at the moment of killing. It still has to be pointed in the general direction of the enemy, loaded, fuelled, and launched; and you still have to convince the people doing the work that it needs to be done.

Replies from: gwern, Epiphany, Epiphany
comment by gwern · 2013-06-14T23:14:20.183Z · LW(p) · GW(p)

A Predator drone requires more people to run it - fuelling, arming, polishing the paint - than a fighter aircraft does.

It does? I would've guessed the exact opposite and that the difference would be by a large margin: drones are smaller, eliminate all the equipment necessary to support a human, don't have to be man-rated, and are expected to have drastically less performance in terms of going supersonic or executing high-g maneuvers.

Replies from: Randaly, Epiphany
comment by Randaly · 2013-06-15T23:10:02.134Z · LW(p) · GW(p)

Yes. An F-16 requires 100 support personnel; a Predator 168; a Reaper, 180. Source.

It seems like some but not all of the difference is that manned planes have only a single pilot, whereas UAVs not only have multiple pilots, but also perform much more analysis on recorded data and split the job of piloting into multiple subtasks for different people, since they are not limited by the need to have only 1 or 2 people controlling the plane.

If I had to guess, some of the remaining difference is probably due to the need to maintain the equipment connecting the pilots to the UAV, in addition to the UAV itself; the most high-profile UAV failure thus far was due to a failure in the connection between the pilots and the UAV.

Replies from: gwern
comment by gwern · 2013-06-16T01:14:57.531Z · LW(p) · GW(p)

I'm not sure that's an apples-to-apples comparison. From the citation for the Predator figure:

About 168 people are needed to keep a single Predator aloft for 24 hours, according to the Air Force. The larger Global Hawk surveillance drone requires 300 people. In contrast, an F-16 fighter aircraft needs fewer than 100 people per mission.

I'm not sure how long the average mission for an F-16 is, but if it's less than ~12 hours, then the Predator would seem to have a manpower advantage; and the CRS paper cited also specifically says:

In addition to having lower operating costs per flight hour, specialized unmanned aircraft systems can reduce flight hours for fighter aircraft

Replies from: Randaly
comment by Randaly · 2013-06-16T04:06:39.759Z · LW(p) · GW(p)

The F-16 seems to have a maximum endurance of 3-4 hours, so I'm pretty sure its average mission is less than 12 hours.

My understanding was that Rolf's argument depended on the ratio personnel:plane, not on the ratio personnel:flight hour; the latter is more relevant for reconnaissance, ground attack against hidden targets, or potentially for strikes at range, whereas the former is more relevant for air superiority or short range strikes.

Replies from: gwern
comment by gwern · 2013-06-16T17:05:27.096Z · LW(p) · GW(p)

I don't think it saves Rolf's point:

The actual US armed forces are a few million. 5% would be a much better estimate. This aside, you are ignoring that "lethal autonomy" is nowhere near the same thing as "operational autonomy". A Predator drone requires more people to run it - fuelling, arming, polishing the paint - than a fighter aircraft does.

If you are getting >6x more flight-hours out of a drone for an increased manpower cost of <2x, then even if you keep the manpower constant and shrink the size of the fleet to compensate for that <2x manpower penalty, you've still got a new fleet which is several times more lethal. Or you could take the tradeoff even further and have an equally lethal fleet with a small fraction of the total manpower, because each drone goes so much further than its equivalent. So a drone fleet of similar lethality does have more operational autonomy!

That's why per flight hour costs matter - because ultimately, the entire point of having these airplanes is to fly them.
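(A rough back-of-the-envelope sketch of this comparison, using only the figures quoted earlier in the thread; the ~3.5-hour F-16 mission length is an assumption based on the endurance figure mentioned above, and the result is illustrative rather than authoritative.)

```python
# Manpower per flight hour: Predator vs. F-16, using figures cited in this thread.
# Assumption: an average F-16 mission lasts ~3.5 hours, and the "fewer than 100
# people per mission" figure applies to one such mission.

predator_people_per_24h = 168       # people to keep one Predator aloft for 24 hours
f16_people_per_mission = 100        # people per F-16 mission (upper bound)
f16_mission_hours = 3.5             # assumed average mission length

predator_rate = predator_people_per_24h / 24            # ~7 people per flight hour
f16_rate = f16_people_per_mission / f16_mission_hours   # ~29 people per flight hour

print(f"Predator: ~{predator_rate:.1f} people per flight hour")
print(f"F-16:     ~{f16_rate:.1f} people per flight hour")
print(f"Drone advantage per flight hour: ~{f16_rate / predator_rate:.1f}x")
```

On those assumptions the drone comes out several times cheaper in manpower per flight hour, which is the sense in which the fleet-level lethality argument above goes through.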

comment by Epiphany · 2013-06-15T01:24:31.197Z · LW(p) · GW(p)

Would you happen to be able to provide these figures:

The ratio of human resources-to-firepower on the current generation of weapons.

The ratio of human resources-to-firepower on the weapons used during eras where oppression was common.

I'd like to compare them.

Hmm, "firepower" is vague. I think the relevant number here would be something along the lines of how many people can be killed or subdued in a conflict situation.

Replies from: gwern
comment by gwern · 2013-06-15T03:30:29.353Z · LW(p) · GW(p)

I have no idea; as I said, my expectations are just guesses based on broad principles (slow planes are cheaper than ultra-fast planes; clunky planes are cheaper than ultra-maneuverable ones; machines whose failure does not immediately kill humans are cheaper to make than machines whose failure does entail human death; the cheapest, lightest, and easiest-to-maintain machine parts are the ones that aren't there). You should ask Rolf, since apparently he's knowledgeable on the topic.

Replies from: Epiphany
comment by Epiphany · 2013-06-15T04:28:58.632Z · LW(p) · GW(p)

Thanks. I will ask Rolf.

comment by Epiphany · 2013-06-15T04:29:16.324Z · LW(p) · GW(p)

Would you happen to be able to provide these figures:

The ratio of human resources-to-firepower on the current generation of weapons.

The ratio of human resources-to-firepower on the weapons used during eras where oppression was common.

I'd like to compare them.

Hmm, "firepower" is vague. I think the relevant number here would be something along the lines of how many people can be killed or subdued in a conflict situation.

comment by Epiphany · 2013-06-15T01:03:22.227Z · LW(p) · GW(p)

Excuse me? You are taking the number of military-age males and using it as the number of soldiers!

Yes!

The actual US armed forces are a few million. 5% would be a much better estimate.

If the question here is "How many people are currently in the military" my figure is wrong. However, that's not the question. The question is "In the event that a robot army tries to take over the American population, how many American soldiers might there be to defend America?" You're estimating in a different context than the one in my comment.

This aside, you are ignoring that "lethal autonomy" is nowhere near the same thing as "operational autonomy"

Actually, if you're defining "operational autonomy" as "how many people it takes to run weapons", I did address that when I said "I'm not sure how many maintenance people and logistics people it would require, but even if we double that .22%, we still have only .44%." If you have better estimates, would you share them?

How? "Do as I say, or else I'll order you to fire up the drones on your base and have them shoot you!"

Method A. They could wait until the country is in turmoil and prey on people's irrationality like Hitler did.

Method B. They could get those people to operate the drones under the guise of fighting for a good cause. Then they could threaten to use the army to kill anyone who opposes them. This doesn't have to be sudden - it could happen quite gradually, as a series of small and oppressive steps and rules wrapped in doublespeak that eventually lead up to complete tyranny. If people don't realize that most other people disagree with the tyrant, they will feel threatened and probably comply in order to survive.

Method C. Check out the Milgram experiment. Those people didn't even need to be coerced to apply lethal force. It's a lot easier than you think.

Method D. If they can get just a small group to operate a small number of drones, they can coerce a larger group of people to operate more drones. With the larger group of people operating drones, they can coerce even more people, and so on.

Why will that work better with drones than with rifles?

This all depends on the ratio of the number of people it takes to operate the weapons to the number of people the weapons can subdue. Your perception appears to be that Predator drones require more people to run them than a fighter aircraft does. My perception is that it doesn't matter how many people it takes to operate a Predator drone, because war technology is likely to be optimized further than it is today, and if the number of people required to build, maintain, and run the killer robots can be pushed significantly below the number of people it would take to get the same amount of firepower otherwise, then of course they can take over a population more easily.

A high firepower to human resource ratio means takeovers would work better.

A lethally-autonomous robot is just a weapon whose operator is well out of range at the moment of killing.

That's not what Suarez says. Even if he's wrong do you deny that it's likely that technology will advance to the point where people can make robots capable of killing without a human making the decision? That's what this conversation is about. Don't let us get all mixed up like Eliezer warns us about in 37 ways words can be wrong. If we're talking about robots that can kill without a human's decision, those are a threat, and could potentially reduce the human-resources-to-firepower ratio enough to threaten democracy. If you want to disagree with me about which words I should use to talk about this, that's fine, but in that case I'd like to know where your credible sources are so that I can read authoritative definitions.

and you still have to convince the people doing the work that it needs to be done.

Hitler.

Milgram experiment.

Number of sociopaths: 1 in 20.

Is rationality taught in school?: No.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-06-16T02:42:19.109Z · LW(p) · GW(p)

Methods A, B, C and D

What prevents these methods from being used with rifles? What is special about robots in this context?

Even if he's wrong do you deny that it's likely that technology will advance to the point where people can make robots capable of killing without a human making the decision?

No, we already have those. The decision to kill has nothing to do with it. The decisions about where to put the robot, its ammunition, its fuel, and everything else it needs, so that it's in a position to make the decision to kill, are what we cannot yet make programmatically. You're confusing tactics and strategy. You cannot run an army without strategic decision-makers. Robots are not in a position to do that for, I would guess, at least twenty years.

Hitler. Milgram experiment. Number of sociopaths: 1 in 20. Is rationality taught in school?: No.

Ok, so this being so, how come we don't already have oppressive societies being run with plain old rifles?

comment by Randaly · 2013-06-15T23:06:15.768Z · LW(p) · GW(p)

This is implausible. There is no conceivable motive for people to support the hypothetical robot army; there is not a chance in hell that 1.5 million people would voluntarily build a robot army for a tyrant, who doesn't have the many trillions of dollars needed to pay them (since nobody has that much money) [1], who is unable to keep secret the millions of people building illegal weaponry for him, and who has almost no chance of succeeding even with the robot army, since the US military outspends everybody.

[1]: 1/200 of the US population × an average Microsoft salary ≈ 150 billion USD per year. This would require many, many years of work - given how long the military has worked on Predators, probably decades. So it would require trillions of dollars.
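(A quick sketch of the arithmetic behind this footnote; the population and salary figures are round 2013-era approximations and are assumptions, not exact values.)

```python
# Footnote [1] arithmetic: payroll for 1/200 of the US population at roughly an
# average Microsoft salary. Assumptions: US population ~310 million, average
# salary ~$100,000 per year.

us_population = 310_000_000
workers = us_population / 200        # ~1.55 million people
salary_per_year = 100_000            # USD, assumed

annual_payroll = workers * salary_per_year   # ~$155 billion per year
print(f"Annual payroll: ~${annual_payroll / 1e9:.0f} billion")
for years in (10, 20, 30):
    print(f"Over {years} years: ~${annual_payroll * years / 1e12:.1f} trillion")
```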

Also, I don't think you understand sociopathy. The 1/20 figure you cited should be 1/25, which refers to the DSM's "antisocial personality disorder;" sociopathy is a deficit in moral reasoning, which is very different from being a person who's just waiting to become a minion to some dictator.

Replies from: wedrifid, Epiphany
comment by wedrifid · 2013-06-16T06:42:51.266Z · LW(p) · GW(p)

This is implausible. There is no conceivable motive for people to support the hypothetical robot army; there is not a chance in hell that 1.5 million people would voluntarily build a robot army for a tyrant, who doesn't have the many trillions of dollars needed to pay them (since nobody has that much money)

For a start, I don't believe you. People have done comparable things for tyrants in the past (complete albeit probably inefficient dedication of the resources of the given tribe to the objectives of the tyrant---horseback archers and small moustaches spring to mind). But that isn't the primary problem here.

The primary problem would be a country creating the army in the usual way that a country creates an army, except that, once built, this army would be much easier for an individual (or a few) to control. That makes it easier for such people to become tyrants and, once they are, to retain their power. This kind of thing (a general seizing control by means of his control of the military) is not unusual for humans. Killer robots make it somewhat easier. Controlling many humans is complicated and unreliable.

comment by Epiphany · 2013-06-16T01:59:24.971Z · LW(p) · GW(p)

there is not a chance in hell that 1.5 million people would voluntarily build a robot army for a tyrant

There are so many ways that a tyrant could end up with a robot army. Let's not pretend that that's the only way. Here are a few:

  1. A country is in turmoil and a leader comes along who makes people feel hope. The people are open to "lesser evil" propositions and risk-taking because they are desperate. They make irrational decisions and empower the wrong person. Hitler is a real life actual example of this happening.

  2. A leader who is thought of as "good" builds a killer robot army. Then the realization that they have total power over their people corrupts them, and they behave like a tyrant, effectively turning into an oppressive dictator.

  3. Hypothetical scenario: The setting is a country with presidential elections (I choose America for this one), and we'll say the technology to do this is completely ready to be exploited. The government begins to build a killer robot army. A good president happens to be in office, so people think it's okay. We'll say that president gets a second term. Eight years pass, and a significant killer robot army is created. It's powerful enough to kill every American. Now it's time to change the president. Maybe the American people choose somebody with their best interests in mind. Maybe they choose a wolf in sheep's clothing, or a moron who doesn't understand the dangers. It's not as if we haven't elected morons before, and it isn't as if entire countries full of people have never empowered anyone dangerous. I think it's reasonable to say that there's at least a 5% chance that each election will yield either a fatally moronic person, an otherwise good person who is susceptible to being seriously corrupted if given too much power, someone with a tyrant's values and personality, or a sociopath. If you're thinking to yourself "how many times in American history have we seen a president corrupted by power?", consider that there have been checks and balances in place to keep presidents from having enough power to be corrupted by. In my opinion, it's likely that most of them would be corrupted by the kind of absolute power a killer robot army would give them, and 5% is actually quite a low estimate compared with my model of how reality works. But for hypothetical purposes, we'll pretend it's only 5%. We roll the dice on that 5% chance every four years, because we hold elections again. Adding those 5% chances up over the rest of my life, it becomes more likely than not (62.5%) that the wrong person will end up having total control over the country I live in (a rough version of this calculation is sketched below).
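(A rough sketch of the compounding in scenario 3, assuming a 5% chance per four-year election cycle and roughly 50 more years of elections; simply summing the chances gives the 62.5% figure above, while treating the elections as independent events gives a somewhat lower number.)

```python
# Chance of at least one bad outcome over n independent elections, each with
# probability p. Assumptions: p = 0.05, n = 12 or 13 remaining elections.

p = 0.05
for n in (12, 13):
    summed = n * p                    # the simple addition used above
    compounded = 1 - (1 - p) ** n     # probability of at least one bad outcome
    print(f"n={n}: summed ~{summed:.0%}, compounded ~{compounded:.0%}")
```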

Do you see now how a tyrant or other undesirable leader could very conceivably end up heading a killer robot army?

[1]: 1/200 US population average microsoft salary = 150 billion USD. This would require many, many years of work- given how long the military has worked on predators, probably decades. So it would require trillions of dollars.

Thank you. I am very glad for these figures. How long do you think it would take for the US government to build 100 million killer robots?

Also, I don't think you understand sociopathy. The 1/20 figure you cited should be 1/25, which refers to the DSM's "antisocial personality disorder;"

Not sure why we have different numbers, but: the statistics for America are different from the statistics for other countries (so, depending on whether your source is aiming for a global or a local figure, this can vary), the statistic probably changes over time, the DSM changes over time, there are multiple sources on this that probably do not agree, and the 1/20 figure is based on research I did ten years ago, so something in there probably explains it. A one-percentage-point difference in prevalence is irrelevant here, since (in my extremely amateurish, shoot-from-the-hip estimate "just to get some perspective") if 1 in 200 people are willing to work on the robot army, that's enough - and 1/25 is obviously far larger than that.

sociopathy is a deficit in moral reasoning, which is very different from being a person who's just waiting to become a minion to some dictator.

Ah, but if you want to be a tyrant, you don't need minions who have been dreaming of becoming minions. Consider this - most people who are employed at a job didn't dream of becoming what they are. There are a lot of people working as cashiers, grocery baggers, doing boring work in a factory, working in telemarketing, etc. who dislike and even despise their jobs. Why do they do them? Money, obviously.

Why don't those people turn into drug dealers? They'd make more money that way.

Ethics!

Those people have a sense of right and wrong, or at least are successfully coerced by laws.

People with antisocial personality disorder, the way the DSM defines it, have neither of these properties.

You said yourself above that most people wouldn't want to build a robot army for a tyrant. I agree. But a sociopath doesn't give a rat's behind how they get their money. That is why they are more likely to work for a tyrant - they don't have a conscience and they don't care about the law. If they can make more money assembling killer robots than flipping burgers, there's nothing to stop them from taking the killer robot job.

Taking this into consideration, do you think sociopaths could end up building killer robots?

Replies from: Randaly
comment by Randaly · 2013-06-16T06:05:28.329Z · LW(p) · GW(p)

It seems to me like you're outlining four different scenarios:

1) The United States, or another major power, converts from manned to unmanned weapons of war. A military coup is impossible today because soldiers won't be willing to launch one; were soldiers to be replaced by robots, they could be ordered to.

2) Another state develops unmanned weapons systems which enable it to defeat the United States.

3) A private individual develops unmanned weapons systems which enable them to defeat the United States.

4) Another state which is already a dictatorship develops unmanned weapons systems which allow the dictator to remain in power.

My interpretation of your original comment was that you were arguing for #3; that is the only context in which hiring sociopaths would be relevant, as normal weapons development clearly doesn't require hiring a swarm of sociopathic engineers. The claim that dictatorships exclusively or primarily rely on sociopaths is factually wrong: e.g., according to data from Order Police Battalion 101, 97% of an arbitrary sample of Germans under Hitler were willing to take guns and mow down civilians. Certainly, close to 100% of an arbitrary sample of people would be willing to work on developing robots for either the US or any other state - we can easily see this today.

If you were arguing for #2, then my response would be that the presence of unmanned weapons systems wouldn't make a difference one way or another - if we're positing another state able to outdevelop, then defeat, the US, it would presumably be able to do so anyways. The only difference would be if it had an enormous GDP but low population; but such a state would be unlikely to be an aggressive military dictatorship, and, anyways, clearly doesn't exist.

For #4, current dictatorships are too far behind in terms of technological development for unmanned weapons systems to have a significant impact - what we see today is that the most complex weapons systems are produced in a few, mostly stable and democratic nations, and there's good reason to think that democracy is caused by economic and technological development, such that the states that are most able to produce unmanned weapons are also the most likely to already be democratic. (More precisely, the most likely to be democratic by the time they build enough unmanned weapons, which seems to be decades off at a minimum.) Worst case, there are 1-3 states (Iran, Russia, China[*]) likely to achieve the capacity to build their own unmanned weapons systems without being democracies; and even then, it's questionable whether unmanned weapons systems would be able to do all that much. (It depends on the exact implementation, of course; but in general, no robots can assure the safety of a dictator, and they cannot stop the only way recent Great Power dictatorships have loosened up: by choice of their leaders.)

[*] This is a list of every country that is a dictatorship or quasi-dictatorship that's built its own fighters, bombers, or tanks, minus Pakistan. I'm very confident that China's government already has enough stability/legitimacy concerns and movement towards democracy that they would implement safeguards. Iran and Russia I give ~50% chances each of doing so.

If you were arguing for #1, then a) the US has well-established procedures for oversight of dangerous weapons (i.e. WMD) which have never failed, b) it would be much easier for the President and a small cabal to gain control using nukes than robots, c) the President doesn't actually have direct control of the military - the ability to create military plans and respond to circumstances for large groups of military robots almost certainly requires AGI, d) as noted separately, there will never be a point, pre-AGI, where robots are actually independent of people, and e) conspiracies as a general rule are very rarely workable, and this hypothetical conspiracy seems even less workable than most, because it requires many people working together over at least several decades, each person ascending to an elite position.

How long do you think it would take for the US government to build 100 million killer robots?

I don't believe that it ever will. If the US spends $6,000 in maintenance per robot, 100 million robots would eat up the entire US military budget. $6,000 is almost certainly a severe underestimate of the cost of operating them, by roughly 3 orders of magnitude, and anyways, that neglects a huge number of relevant factors: the cost of purchasing an MQ-1, amortized over a 30-year operating period, is roughly the same per year as the operating cost; money also needs to be spent doing R&D; non-robot fixed costs total at least 6% of the US military budget; much of US military spending is on things like carriers or aerial refueling or transports whose robot equivalents wouldn't be 'killer robots'; etc. (The military budget may go up over time, but the cost per plane has risen faster than the military budget since WWII, so if anything this also argues against large numbers of robots.)
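(A minimal sketch of the budget arithmetic above; it assumes a 2013-era US military budget of roughly $600 billion per year and, for concreteness, treats the maintenance figure as annual, a point queried further down the thread.)

```python
# Rough cost check for 100 million killer robots against the US military budget.
# Assumptions: annual budget ~$600 billion; maintenance figures are per robot per year.

annual_budget = 600e9                 # USD per year, assumed 2013-era figure
robots = 100_000_000

quoted_maintenance = 6_000            # USD per robot (figure quoted above)
adjusted_maintenance = 6_000 * 1_000  # "roughly 3 orders of magnitude" higher

print(f"At $6,000 per robot:     ~${robots * quoted_maintenance / 1e9:.0f} billion")
print(f"At $6,000,000 per robot: ~{robots * adjusted_maintenance / annual_budget:.0f}x the budget")
```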

An alternative date: I would expect the USAF to be majority unmanned by 2040 +- 5 years (50% bounds, most uncertainty above); this is roughly one lifecycle of planes forward from today. (Technically it's a fair bit less; but I'd expect development to speed up somewhat.)

I would expect the US Army to deploy unmanned ground combat units in serious numbers by 2035 +- 5 years.

I would expect the USAF to remove humans from the decision making loop on an individual plane's flights in 2045 +- 5 years; on a squadron, including maintenance, command, etc, 2065 +- 5 years; above that, never.

Replies from: Epiphany
comment by Epiphany · 2013-06-17T00:36:29.363Z · LW(p) · GW(p)

How long do you think it would take for the US government to build 100 million killer robots?

I don't believe that it ever will.

Technologies become less expensive over time, and as we progress, our wealth grows. If the US doesn't have the money to produce them at the current cost, that doesn't mean it will never be able to afford to do so.

If the US spends $6,000 in maintenance per robot, 100 million robots would eat up the entire US military budget. $6,000 is almost certainly a severe underestimate of the cost of operating them, by roughly 3 orders of magnitude

You didn't specify a time period - should I assume that's yearly? Also, do they have to pay $6,000 in maintenance costs while the units are in storage?

and anyways, that neglects a huge number of relevant factors: the cost of purchasing an MQ-1, amortized over a 30-year operating period, is roughly the same per year as the operating cost; money also needs to be spent doing R&D; non-robot fixed costs total at least 6% of the US military budget; much of US military spending is on things like carriers or aerial refueling or transports whose robot equivalents wouldn't be 'killer robots'; etc. (The military budget may go up over time, but the cost per plane has risen faster than the military budget since WWII, so if anything this also argues against large numbers of robots.)

Okay, so an MQ-1 is really, really expensive. Thank you.

An alternative date: I would expect the USAF to be majority unmanned by 2040 +- 5 years (50% bounds, most uncertainty above); this is roughly one lifecycle of planes forward from today. (Technically it's a fair bit less; but I'd expect development to speed up somewhat.) I would expect the US Army to deploy unmanned ground combat units in serious numbers by 2035 +- 5 years.

What is "serious numbers"?

I would expect the USAF to remove humans from the decision making loop on an individual plane's flights in 2045 +- 5 years; on a squadron, including maintenance, command, etc, 2065 +- 5 years; above that, never.

What do you mean by "above that, never"?

Sorry I didn't get to your other points today. I don't have enough time.

P.S. How did you get these estimates for when unmanned weapons will come out?

comment by Epiphany · 2013-06-14T06:29:20.463Z · LW(p) · GW(p)

Possible Solution: Legislation to ban lethal autonomy. (Suggested by Daniel Suarez, please do not confuse his opinion of whether it is likely to work with mine. I am simply listing it here to encourage discussion and debate.)

Replies from: Epiphany, Epiphany, Epiphany, Epiphany, Epiphany, Epiphany, Epiphany, Epiphany
comment by Epiphany · 2013-06-14T08:11:22.976Z · LW(p) · GW(p)

Pro: Passing a law would probably generate news stories and may make the public more aware of the problem, increasing the chances that someone solves the problem.

comment by Epiphany · 2013-06-14T08:11:16.406Z · LW(p) · GW(p)

Pro: Passing a law is likely to spread the word to the people in the military, some of whom may then have key ideas for preventing issues.

comment by Epiphany · 2013-06-14T08:11:09.787Z · LW(p) · GW(p)

Pro: Passing a law would make it more likely that the legislative branch of the government is aware of the peril it's in.

comment by Epiphany · 2013-06-14T08:11:05.934Z · LW(p) · GW(p)

Pro: This might delay disaster long enough for better solutions to come along.

comment by Epiphany · 2013-06-14T08:11:01.286Z · LW(p) · GW(p)

Con: If the executive branch of the government has the ability to make these weapons, the legislative branch will no longer pose a threat to it. Legally, the weapons will be forbidden, but practically speaking, they will not be prevented. Laws don't prevent people from behaving badly, nor do they guarantee that bad behavior will be punished; they just define the bad behavior and specify consequences. The consequences are contingent on whether the person is caught and whether the authorities have enough power to dole out a punishment. In the event that the lawbreaker gains so much power that the authorities can't stop them, the threat of punishment is moot. A law can't solve the checks-and-balances issue.

comment by Epiphany · 2013-06-14T06:42:15.655Z · LW(p) · GW(p)

Con: If militaries come to believe that having killer robots is critical to national defense (either because their enemies are posing a major threat, or because they're more effective than other strategies or required as a part of an effective strategy) then they will likely oppose this law or refuse to follow it. Even if they manage to resist the temptation to build them as a contingency plan against risks, if they're ever put into a position where there's an immediate threat (for instance: choosing between death and lethal autonomy), they are likely to choose lethal autonomy. It may be impossible to keep them from using these as a weapon in that case, making the ban on lethal autonomy just another ineffectual rule.

If the consequences of breaking a rule are not as grave as the consequences of following it, then the rule isn't likely to be followed.

comment by Epiphany · 2013-06-14T08:10:48.161Z · LW(p) · GW(p)

Con: It's said about banning guns that it doesn't keep bad people from having weapons; it just keeps good people unarmed. I'm concerned that the same may be true of laws that intentionally reduce the effectiveness of one's warfare technology.

comment by CasioTheSane · 2013-06-20T21:30:53.892Z · LW(p) · GW(p)

The barriers to entry in becoming a supervillain are getting lower and lower - soon just anybody will be able to 3-D print an army of flying killer robots with lethal autonomy.

comment by ikrase · 2013-06-17T22:32:29.908Z · LW(p) · GW(p)

I think that the democracy worries are probably overblown. I'd be more worried about skyrocketing collateral damage.

comment by hylleddin · 2013-06-14T19:08:26.142Z · LW(p) · GW(p)

It seems like a well-publicized, notorious event in which a lethally autonomous robot killed a lot of innocent people would significantly broaden the appeal of friendliness research, and could even lead to disapproval of AI technology, similar to how Chernobyl had a significant impact on the current widespread disapproval of nuclear power.

For people primarily interested in existential UFAI risk, the likelihood of such an event may be a significant factor. Other significant factors are:

  • National instability leading to a difficult environment in which to do research

  • National instability leading to reckless AGI research by a group in attempt to gain an advantage over other groups.

Replies from: Pentashagon, Epiphany
comment by Pentashagon · 2013-06-14T23:34:39.187Z · LW(p) · GW(p)

Like this? Interestingly, it's alleged that the autonomous software may not have been the (direct) cause of the failure but that undetected mechanical failure led to the gun continuing to fire without active aiming.

Replies from: hylleddin
comment by hylleddin · 2013-06-15T00:00:26.198Z · LW(p) · GW(p)

Yes, but on a much larger scale.

Or possibly just a more dramatic scale. Three Mile Island had a significant effect on public opinion even without any obvious death toll.

comment by Epiphany · 2013-06-14T19:39:53.463Z · LW(p) · GW(p)

I sincerely hope that the people have time to think this out before such an event occurs. Otherwise, their reaction may trigger the "cons" posted in the legislation suggestion.

comment by Epiphany · 2013-06-14T06:30:21.932Z · LW(p) · GW(p)

Possible Solution: Using 3-D printers to create self-defense technologies that check and balance power.

Replies from: wedrifid, ikrase, Epiphany, Epiphany
comment by wedrifid · 2013-06-14T14:48:23.026Z · LW(p) · GW(p)

Con: Everybody will probably die. This solution magnifies instability in the system. One person being any one of insane, evil or careless could potentially create an extinction event. At the very least they could cause mass destruction within a country that takes huge efforts to crush.

Replies from: Epiphany
comment by Epiphany · 2013-06-14T17:55:36.454Z · LW(p) · GW(p)

I agree that it's possible that in this scenario everyone will die but I am not sure why you seem to think it is the most likely outcome. Considering that governments will probably have large numbers of these, or comparable weapons, before the people do, or that they will create comparable weapons if they observe their populace building weapons with 3-D printers, I think it's more likely that the power the people (including criminal organizations) wield via killer robots will be kept in check than that any of these groups will be able to rove around and kill everyone. Perhaps you envision a more complex chain of events unfolding? Do you expect a clusterfuck? Or is there some other course that you think things would take? What and why?

Replies from: wedrifid
comment by wedrifid · 2013-06-15T15:43:11.537Z · LW(p) · GW(p)

I agree that it's possible that in this scenario everyone will die but I am not sure why you seem to think it is the most likely outcome.

We are considering a scenario where technology has been developed and disseminated sufficiently to allow Joe Citizen to produce autonomous killer robots with his home-based, general-purpose automated manufacturing device. People more intelligent, educated, resourceful, and motivated than Joe Citizen are going to be producing things even more dangerous. And producing things that produce things that... I just assume that kind of environment is not stable.

Replies from: Epiphany
comment by Epiphany · 2013-06-16T00:56:06.460Z · LW(p) · GW(p)

Ok, so it's not the killer robots you envision killing off humanity, it's the other technologies that would likely be around at that time, and/or the whole mixture of insanity put together?

Replies from: wedrifid
comment by wedrifid · 2013-06-16T05:26:01.896Z · LW(p) · GW(p)

Ok, so it's not the killer robots you envision killing off humanity, it's the other technologies that would likely be around at that time, and/or the whole mixture of insanity put together?

In particular, the technologies being used to create killer robots, which would necessarily be around at the time: sufficiently general, small-scale, but highly complex manufacturing capability combined with advanced mobile automation. The combination is already notorious.

Replies from: Epiphany
comment by Epiphany · 2013-06-17T00:10:07.728Z · LW(p) · GW(p)

You know, we've invented quite a few weapons over time and have survived quite a few "replicators" (the Black Death will be my #1 example)... we're not dead yet, and I'm wondering if there are some principles keeping us alive which you and I have overlooked.

For a shot at what those could be:

1) Regarding self-replicators:

  • Self-replicators make near-perfect copies of themselves, and so they are optimized to work in most, but not all, situations. This means that there's a very good chance that at least some of a given species will survive whatever the self-replicators are doing.

  • Predators strike prey as terrifying, but their weakness is that they depend on the prey. Predators of all kinds die when they run out of prey. Some prey probably always hides, so unless the predator is really intelligent, it is likely that some prey will survive and will get a break from the predators, which they can use to develop strategies.

2) Regarding weapons:

  • For this discussion, we've been talking almost exclusively about offensive weapons. However, governments create defenses as well - probably often with the intent of countering their own offensive weapons. I don't know much about what sorts of defensive weapons there could be in the future; do you? If not, this lack of information about defensive weapons might be causing us to exaggerate the risk of offensive weapons.

  • Governments must value defense, or else they would not invest in it and would instead put those resources into offense. Looking at it this way, I realize that offense is slowed down by defense, and/or that a certain ratio of defensive power to offensive power may be constantly maintained, because it's an intelligent agent creating these weapons and it is motivated to have both offense and defense. If defense keeps pace with offense for this or any other reason (maybe reasons having to do with the insights that technological advancement provides), then there may be far less risk than we're perceiving.

  • If we reach maximum useful offense (I'll roughly define this as the ability to destroy, instantly and with specific targeting, every person or autonomous weapon in the world that is a threat), there will be no point in focusing on offensive weapons anymore. If maximum useful offense is reached (or perhaps an even earlier point... maybe one where the offensive capabilities of the enemy are too harrowing and your own are overkill), then this would be the point at which that balance in what we focus on would likely shift. By focusing primarily or solely on defense, we could enter an era where war is infeasible. Though after all the factors that would have a lasting effect on whether it is easier to make progress in defense or offense faded (such as factories to build defensive items or laborers trained in defense), we'd be back to square one. But a "defense era" might give us time to solve the problem - after we have all woken up to how critical it is, and also have specifics on the situation.

comment by ikrase · 2013-06-17T22:53:31.411Z · LW(p) · GW(p)

You people have got to get over your 3-D printer obsessions. The effect is minimal. A person capable of building actually dangerous drones would just use lathes and mills.

comment by Epiphany · 2013-06-14T08:26:45.526Z · LW(p) · GW(p)

Pro: Checking and balancing power is a solution we've used in the past. We know that it can work.

comment by Epiphany · 2013-06-14T06:48:15.320Z · LW(p) · GW(p)

Con: If power were checked and balanced perfectly, right from the beginning, then stasis would be maintained. However, this may not be what's likely. We may see a period full of power struggles where large numbers of people are unprotected and factions like organized crime groups, oppressive governments or citizens with tyrannical ambitions rise up and behave as feudal lords.

Replies from: Kawoomba
comment by Kawoomba · 2013-06-14T07:17:29.717Z · LW(p) · GW(p)

Is this like a one-woman topic, complete with discussion? A finished product?

I think 3-D printers that counterbalance death from above are ... a ways off.

Replies from: wedrifid, Epiphany
comment by wedrifid · 2013-06-14T14:36:14.502Z · LW(p) · GW(p)

Is this like a one-woman topic, complete with discussion? A finished product?

Or perhaps it is merely a different way of formatting a discussion post, with the evident intention of making it easier to organise replies. As an experimental posting style this solution has, shall we say, pros and cons.

comment by Epiphany · 2013-06-14T07:22:07.273Z · LW(p) · GW(p)

No. It just looks that way because I just started it. Please contribute your thoughts.

comment by CronoDAS · 2013-06-14T07:00:23.865Z · LW(p) · GW(p)

Don't there exist weapons that already exhibit the property of "lethal autonomy" - namely, land mines?

Replies from: Epiphany, wedrifid, JoshuaFox
comment by Epiphany · 2013-06-14T07:08:53.099Z · LW(p) · GW(p)

That's not even comparable. Consider this:

  • Land mines don't distinguish between your allies and your enemies.
  • Land mines don't move and people can avoid them.

Unless your enemy is extremely small and/or really terrible at strategy, you can't win a war with land mines. On the other hand, these killer robots can identify targets, could hunt people down by tracking various bits of data (transactions, cell phone signals, etc), could follow people around using surveillance systems, and can distinguish between enemies and allies. With killer robots, you could conceivably win a war.

comment by wedrifid · 2013-06-14T14:27:27.082Z · LW(p) · GW(p)

Don't there exist weapons that already exhibit the property of "lethal autonomy" - namely, land mines?

Basically, no. Being a trigger that blows up when stepped on isn't something that can realistically be called autonomy.

Replies from: CronoDAS
comment by CronoDAS · 2013-06-15T01:25:34.834Z · LW(p) · GW(p)

::points to exhibit of plucked chicken wearing "I'm a human!" sign::

Well, yeah, it's a far cry from killer robots, but once a mine is planted, who dies and when is pretty much entirely out of the hands of the person who planted it. And there are indeed political movements to ban the use of land mines, specifically because of this lack of control; land mines have a tendency to go on killing people long after the original conflict is over. So land mines and autonomous killer robots do share at least a few problematic aspects; could a clever lawyer make a case that a ban on "lethal autonomy" should encompass land mines as well?

A less silly argument could also be directed at already-banned biological weapons; pathogens reproduce and kill people all the time without any human intervention at all. Should we say that anthrax bacteria lack the kind of autonomy that we imagine war-fighting robots would have?

Replies from: Epiphany, wedrifid
comment by Epiphany · 2013-06-15T04:35:19.865Z · LW(p) · GW(p)

Now I'm not sure whether you were (originally) trying to start a discussion about how the term "lethal autonomy" should be used, or if you intended to imply something to the effect of "lethal autonomy isn't a new threat, therefore we shouldn't be concerned about it".

Even if I was wrong in my interpretation of your message, I'm still glad I responded the way I did - this is one of those topics where it's best if nobody finds excuses to go into denial, default to optimism bias, or otherwise fail to see the risk.

Do you view lethally autonomous robots as a potential threat to freedom and democracy?

Replies from: CronoDAS
comment by CronoDAS · 2013-06-15T04:38:33.832Z · LW(p) · GW(p)

I dunno. I'm just a compulsive nitpicker.

Replies from: Epiphany
comment by Epiphany · 2013-06-16T00:52:48.228Z · LW(p) · GW(p)

Lol. Well thank you for admitting this.

comment by wedrifid · 2013-06-15T15:04:00.305Z · LW(p) · GW(p)

Should we say that anthrax bacteria lack the kind of autonomy that we imagine war-fighting robots would have?

Yes. But I wouldn't expect it to come up too often as a sincere question.

comment by JoshuaFox · 2013-06-14T10:29:51.476Z · LW(p) · GW(p)

Or the pit-trap: Lethal autonomy that goes back to the Stone Age :-)

Replies from: CronoDAS
comment by CronoDAS · 2013-06-15T01:17:16.356Z · LW(p) · GW(p)

And deliberately set wildfires.

comment by Epiphany · 2013-06-14T21:50:46.003Z · LW(p) · GW(p)

Possible Solution:

This sounds hard to implement because it would require co-operation from a lot of people, but if the alternative is that our technological progress means we are facing possible extinction (with the 3-D printer solution) or oppression (with the legislation "solution"), that might get most of the world interested in putting the effort into it.

Here's how I imagine it could work:

  1. First, everyone concerned forms an alliance. This would have to be a very big alliance all over the world.

  2. The alliance makes distinctions between weapons likely to threaten humanity and weapons likely to protect humanity from itself. For an over-simplified definition (to give you the gist):

    Weapons that are likely to protect humanity from itself: These are not useful for dominating a population unless a very, very large number of people use them. Example: Guns.

    Weapons that threaten humanity: Any weapon that could be used by a few to dominate many. (And there are more kinds.)

  3. The alliance makes a law saying that anyone found building humanity-threatening weapons will be stopped by the alliance. (We're kind of doing this already, but the next part is different).

  4. In order to make such a policy enforceable, somebody will need to have weapons with which to control those who break the weapons law, the way we have police with guns. Selecting a small number of people to police the illegal weapon makers, and giving those police weapons powerful enough to stop them, would make the problem far, far worse. That would be adopting the same failure mode as Russian communism: give all the power to a few and expect them to distribute their power to the many? Distribution is not the outcome we should anticipate. See the next point for how I think we could do this.

  5. Design only weapons meant to protect humanity from itself, and ensure that a very large percentage of the world's population has these weapons (so that they aren't in the hands of the few). The reason they can overwhelm potentially superior weapons is that there are so many more weapons being wielded. Essentially, everybody becomes the police.