AI Alignment Podcast: On Lethal Autonomous Weapons with Paul Scharre

post by Palus Astra · 2020-03-16T23:00:17.664Z


Most relevant to AI alignment, and a pertinent question for interested readers/listeners to focus on, is: if we are unable to establish a governance mechanism as a global community around the principle that we should not let AI make the decision to kill humans, what effect will this have on our ability to deal with more subtle short-term alignment considerations and long-term AI x-risk?


Podcast page and audio: https://futureoflife.org/2020/03/16/on-lethal-autonomous-weapons-with-paul-scharre/

Transcript:

Lucas Perry: Welcome to the AI Alignment Podcast. I’m Lucas Perry. Today’s conversation is with Paul Scharre and explores the issue of lethal autonomous weapons. And so just what is the relation of lethal autonomous weapons and the related policy and governance issues to AI alignment and long-term AI risk? Well there’s a key question to keep in mind throughout this entire conversation and it’s that: if we cannot establish a governance mechanism as a global community on the concept that we should not let AI make the decision to kill, then how can we deal with more subtle near term issues and eventual long term safety issues about AI systems? This question is aimed at exploring the idea that autonomous weapons and their related governance represent a possibly critical first step on the international cooperation and coordination of global AI issues. If we’re committed to developing beneficial AI and eventually beneficial AGI then how important is this first step in AI governance and what precedents and foundations will it lay for future AI efforts and issues? So it’s this perspective that I suggest keeping in mind throughout the conversation. And many thanks to FLI’s Emilia Javorsky for much help on developing the questions for this podcast. 

Paul Scharre is a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security. He is the award-winning author of Army of None: Autonomous Weapons and the Future of War, which won the 2019 Colby Award and was named one of Bill Gates’ top five books of 2018.

Mr. Scharre worked in the Office of the Secretary of Defense (OSD) where he played a leading role in establishing policies on unmanned and autonomous systems and emerging weapons technologies. Mr. Scharre led the DoD working group that drafted DoD Directive 3000.09, establishing the Department’s policies on autonomy in weapon systems. Mr. Scharre also led DoD efforts to establish policies on intelligence, surveillance, and reconnaissance (ISR) programs and directed energy technologies. He was involved in the drafting of policy guidance in the 2012 Defense Strategic Guidance, 2010 Quadrennial Defense Review, and Secretary-level planning guidance. His most recent position was Special Assistant to the Under Secretary of Defense for Policy. Prior to joining the Office of the Secretary of Defense, Mr. Scharre served as a special operations reconnaissance team leader in the Army’s 3rd Ranger Battalion and completed multiple tours to Iraq and Afghanistan.

The Future of Life Institute is a non-profit and this podcast is funded and supported by listeners like you. So if you find what we do on this podcast to be important and beneficial, please consider supporting the podcast by donating at futureoflife.org/donate. If you support any other content creators via services like Patreon, consider viewing a regular subscription to FLI in the same light. You can also follow us on your preferred listening platform, like on Apple Podcasts or Spotify, by searching for us directly or following the links on the page for this podcast found in the description.

And with that, here’s my conversation with Paul Scharre.

Lucas Perry: All right. So we’re here today to discuss your book, Army of None, and issues related to autonomous weapons in the 21st century. To start things off here, I think we can develop a little bit of the motivations for why this matters. Why should the average person care about the development and deployment of lethal autonomous weapons?

Paul Scharre: I think the most basic reason is because we are all going to live in the world in which militaries are going to be deploying these future weapons. Even if you don’t serve in the military, even if you don’t work on issues surrounding, say, conflict, this kind of technology could affect all of us. And so I think we all have a stake in what this future looks like.

Lucas Perry: Let’s clarify a little bit more about what this technology actually looks like then. Often in common media, and for most people who don’t know about lethal autonomous weapons or killer robots, the media portrays it as a Terminator-like scenario. So could you explain why this is wrong, and what are more accurate ways of communicating with the public about what these weapons are and the unique concerns that they pose?

Paul Scharre: Yes, I mean, the Terminator is like the first thing that comes up because it’s such a common pop culture reference. It’s right there in people’s minds. So I think go ahead and, for the listeners, imagine that humanoid robot in the Terminator, and then just throw that away, because that’s not what we’re talking about. Let me make a different comparison. Self-driving cars. We are seeing right now the evolution of automobiles that with each generation of car incorporate more autonomous features: parking, intelligent cruise control, automatic braking. These increasingly autonomous features in cars that are added every single year, a little more autonomy, a little more autonomy, are taking us down a road toward, at some point in time, fully autonomous cars that would drive themselves. We have something like the Google car where there’s no steering wheel at all. People are just passengers along for the ride. We’re seeing something very similar happen in the military with each generation of robotic systems. We now have air and ground and undersea robots deployed all around the world, with over 100 countries and non-state groups around the globe possessing some form of drones or robotic systems, and with each generation they’re becoming increasingly autonomous.

Now, the issue surrounding autonomous weapons is, what happens when a Predator drone has as much autonomy as a self-driving car? What happens when you have a weapon that’s out in the battlefield, and it’s making its own decisions about whom to kill? Is that something that we’re comfortable with? What are the legal and moral and ethical ramifications of this? And the strategic implications? What might they do for the balance of power between nations, or stability among countries? These are really the issues surrounding autonomous weapons, and it’s really about this idea that we might have, at some point in time and perhaps in the not very distant future, machines making their own decisions about whom to kill on the battlefield.

Lucas Perry: Could you unpack a little bit more about what autonomy really is or means because it seems to me that it’s more like an aggregation of a bunch of different technologies like computer vision and image recognition, and other kinds of machine learning that are aggregated together. So could you just develop a little bit more about where we are in terms of the various technologies required for autonomy?

Paul Scharre: Yes, so autonomy is not really a technology, it’s an attribute of a machine or of a person. And autonomy is about freedom. It’s the freedom that a machine or a person is given to perform some tasks in some environment for some period of time. As people, we have very little autonomy as children and more autonomy as we grow up, we have different autonomy in different settings. In some work environments, there might be more constraints put on you; what things you can and cannot do. And it’s also environment-specific and task-specific. You might have autonomy to do certain things, but not other things. It’s the same with machines. We’re ultimately talking about giving freedom to machines to perform certain actions under certain conditions in certain environments.

There are lots of simple forms of autonomy that we interact with all the time that we sort of take for granted. A thermostat is a very simple autonomous system, it’s a machine that’s given the freedom to decide… decide, let’s put that in air quotes, because we can come back to what it means for machines to decide. But basically, the thermostat is given the ability to turn on and off the heat and air conditioning based on certain parameters that a human sets: a desired temperature, or if you have a programmable thermostat, maybe the desired temperature at certain times of day or days of the week. It’s a very bounded kind of autonomy. And that’s what we’re talking about for any of these machines. We’re not talking about free will, or whether the machine develops consciousness. That’s not a problem today, maybe someday, but certainly not with the machines we’re talking about today. It’s a question really of, how much freedom do we want to give machines, or in this case, weapons operating on the battlefield, to make certain kinds of choices?

Now we’re still talking about weapons that are designed by people, built by people, launched by people, and put into the battlefield to perform some mission, but there might be a little bit less human control than there is today. And then there are a whole bunch of questions that come along with that, like, is it going to work? Would it be effective? What happens if there are accidents? Are we comfortable with ceding that degree of control over to the machine?

Lucas Perry: You mentioned the application of this kind of technology in the context of battlefields. Is there also consideration and interest in the use of lethal autonomous weapons in civilian contexts?

Paul Scharre: Yes, I mean, I think there’s less energy on that topic. You certainly see less of a pull from the police community. I mean, I don’t really run into people in a police or Homeland Security context saying we should be building autonomous weapons, the way you will hear that from militaries. Oftentimes, groups that are concerned about the humanitarian consequences of autonomous weapons will raise that as a concern. There’s both what militaries might do in the battlefield, but then there’s a concern about proliferation. What happens when the technology proliferates, and it’s being used for internal security issues? It could be a dictator using these kinds of weapons to repress the population. That’s one concern. And that’s, I think, a very, very valid one. We’ve often seen that one of the last checks against dictators is when they tell their internal security forces to fire on civilians, on their own citizens. There have been instances where the security forces say, “No, we won’t.” That doesn’t always happen. Of course, tragically, sometimes security forces do attack their citizens. We saw in the massacre in Tiananmen Square that Chinese military troops were willing to murder Chinese citizens. But we’ve seen other instances, certainly in the fall of the Eastern Bloc at the end of the Cold War, where security forces said: these are our friends, these are our family. We’re not going to kill them.

And autonomous weapons could take away one of those checks on dictators. So I think that’s a very valid concern. And there is a more general concern about the proliferation of military technology into policing, even here in America. We’ve seen this in the last 20 years, where a lot of military tech ends up being used by police forces in ways that maybe aren’t appropriate. And so that’s, I think, a very valid and legitimate sort of concern about, even if this isn’t the intended use, what would that look like, what are the risks that could come with that, and how should we think about those kinds of issues as well?

Lucas Perry: All right. So we’re developing autonomy in systems and there’s concern about how this autonomy will be deployed in context where lethal force or force may be used. So the question then arises and is sort of the question at the heart of lethal autonomous weapons: Where is it that we will draw a line between acceptable and unacceptable uses of artificial intelligence in autonomous weapons or in the military, or in civilian policing? So I’m curious to know how you think about where to draw those lines or that line in particular, and how you would suggest to any possible regulators who might be listening, how to think about and construct lines of acceptable and unacceptable uses of AI.

Paul Scharre: That’s a great question. So I think let’s take a step back first and sort of talk about what would be the kinds of things that would make uses acceptable or unacceptable. Let’s just talk about the military context, just to kind of bound the problem for a second. So in the military context, you have a couple reasons for drawing lines, if you will. One is legal issues, legal concerns. We have a legal framework to think about right and wrong in war. It’s called the laws of war or international humanitarian law. And it lays out a set of parameters for what is acceptable and what is not. And so that’s one of the places where there has been consensus internationally among countries that have come together at the United Nations through the Convention on Certain Conventional Weapons, the CCW process, where we’ve had conversations going on about autonomous weapons.

One of the points of consensus among nations is that existing international humanitarian law, or the laws of war, would apply to autonomous weapons. And that for any uses of autonomy in weapons, those weapons have to be used in a manner that complies with the laws of war. Now, that may sound trivial, but it’s a pretty significant point of agreement and it’s one that places some bounds on things that you can or cannot do. So, for example, one of the baseline principles of the laws of war is the principle of distinction. Military forces cannot intentionally target civilians. They can only intentionally target other military forces. And so any use of force needs to comply with this principle of distinction. So right off the bat, that’s a very important and significant one when it comes to autonomous weapons. If you had a weapon that could not be used in a way that complies with this principle of distinction, it would be illegal under the laws of war and you wouldn’t be able to build it.

And there are other principles as well, principles about proportionality, ensuring that any collateral damage that affects civilians or civilian infrastructure is not disproportionate to the military necessity of the target that is being attacked. There are principles about avoiding unnecessary suffering of combatants, and about respecting anyone who’s rendered out of combat, or the appropriate term is “hors de combat,” who has surrendered or been incapacitated, and not targeting them. So these are very significant rules that any weapon system, autonomous weapon or not, has to comply with, and that any use of any weapon, any use of force, has to comply with. And so that is something that constrains considerably what nations are permitted to do in a lawful fashion. Now, do people break the laws of war? Well, sure, that happens. We’re seeing that happen in Syria today, Bashar al-Assad is murdering civilians, and there are examples of rogue actors and non-state terrorist groups and others that don’t care about respecting the laws of war. But those are very significant bounds.

Now, one could also say that there are more bounds that we should put on autonomous weapons, moral or ethical considerations that exist outside the laws of war, that aren’t written down in a formal way in the laws of war, but that are still important, and I think those often come to the fore with this topic. And there are other ones that might apply in terms of reasons why we might be concerned about stability among nations. But the laws of war are at least a very valuable starting point for this conversation about what is acceptable and not acceptable. I want to make clear, I’m not saying that the laws of war are insufficient and we need to go beyond them and add in additional constraints. I’m actually not saying that. There are people that make that argument, and I want to give credit to their argument, and not pretend it doesn’t exist. I want the listeners to sort of understand the full scope of arguments about this technology. But I’m not saying myself that’s the case necessarily. But I do think that there are concerns that people raise.

For example, people might say it’s wrong for a machine to decide whom to kill, it’s wrong for a machine to make the decision about life and death. Now I think that’s an interesting argument. Why? Why is it wrong? Is it because we think the machine might get the answer wrong, that it might not perform as well as humans? Or is it because we think there’s something intrinsic about weighing the value of life and death that we want humans to do, appreciating the value of another person’s life before making one of these decisions? Those are all very valid counterarguments that exist in this space.

Lucas Perry: Yes. So thanks for clarifying that. For listeners, it’s important here to clarify the difference: you’re saying some people would find the laws of war to be sufficient in the case of autonomous weapons, and some would not.

Paul Scharre: Yes, I mean, this is a hotly debated issue. This is in many ways the crux of the issue surrounding autonomous weapons. I’m going to oversimplify a bit because you have a variety of different views on this, but you certainly have some people whose view is: look, we have a set of structures called the laws of war that tell us what right and wrong looks like in war. And most of the things that people are worried about are already prohibited under the laws of war. So for example, if what you’re worried about is autonomous weapons running amok murdering civilians, that’s illegal under the laws of war. And so one of the points of pushback that you’ll sometimes get from governments or others to the idea of creating an ad hoc treaty that would ban autonomous weapons or some class of autonomous weapons is: look, some of the things people worry about are already prohibited under the laws of war, and passing another law to say the thing that’s already illegal is now illegal again doesn’t add any value.

There’s a group of arguments that says the laws of war dictate effects in the battlefield. They dictate sort of what the end effect is; they don’t really address the process. And there’s a line of reasoning that says, that’s fine. The process doesn’t matter. If someday we could use autonomous weapons in a way that was more humane and more precise than people, then we should use them. In just the same way that self-driving cars will someday save lives on roads by avoiding accidents, maybe we could build autonomous weapons that would avoid mistakes in war and accidentally targeting civilians, and therefore we should use them, and let’s just focus on complying better with the laws of war. That’s one school of thought.

Then there’s a whole bunch of reasons why you might say, well, that’s not enough. One reason might be: well, militaries’ compliance with the laws of war isn’t that great. Actually, people talk a good game, but when you look at military practice, especially if the rules for using a weapon are kind of convoluted, where you have to take a bunch of additional steps in order to use it in a way that’s lawful, that kind of goes out the window in conflict. A real-world and tragic historical example of this was experienced throughout the 20th century with landmines, where landmines were permitted to be used lawfully, and still are if you’re not a signatory to the Ottawa Convention, provided you put in place a whole bunch of procedures to make sure that minefields are marked and we know the location of minefields, so they can be demined after conflict.

Now, in practice, countries weren’t doing this. I mean, many of them were just scattering mines from the air. And so we had this horrific problem of millions of mines around the globe persisting after a conflict. The response was basically this global movement to ban mines entirely, to say, look, it’s not that it’s inconceivable to use mines in a responsible way, but it requires a whole bunch of additional efforts that countries aren’t making, and so we have to take this weapon away from countries because they are not actually using it in a way that’s responsible. That’s a school of thought with autonomous weapons: look, maybe you can conjure up thought experiments about how you can use autonomous weapons in these very specific instances where it’s acceptable, but once you start any use, it’s a slippery slope, and the next thing you know, it’ll be just like landmines all over again, and they’ll be everywhere and civilians will be being killed. And so the better thing to do is to just not let this process even start, and not let militaries have access to the technology, because they won’t use it responsibly, regardless of whether it’s theoretically possible. That’s a pretty reasonable and defensible argument. And there are other arguments too.

One could say, actually, it’s not just about avoiding civilian harm, but there’s something intrinsic about weighing the value of an enemy soldier’s life, that we want humans involved in that process. And that if we took humans away from that process, we’d be losing something that, sure, maybe isn’t written down in the laws of war, but maybe it’s not written down because it was always implicit that humans would always be making these choices. And now that this decision is in front of us, we should write it down: that humans should be involved in these decisions and should be weighing the value of a human life, even an enemy soldier’s. Because if we give that up, we might give up something that is a constraint on violence in war that holds back some of the worst excesses of violence; we might even lose something about ourselves. And this is, I think, a really tricky issue because there’s a cost to humans making these decisions. It’s a very real cost. It’s a cost in post-traumatic stress that soldiers face and moral injury. It’s a cost in lives that are ruined, not just the people who are killed on a battlefield, but the people who have to live with that violence afterwards, and the ramifications, and even the choices that they themselves make. It’s a cost in suicides of veterans, and substance abuse and destroyed families and lives.

And so to say that we want humans to stay involved, to be morally responsible for killing, is to say: I’m choosing that cost. I’m choosing to absorb and acknowledge and take on the cost of post-traumatic stress and moral injury, and also the burdens that come with war. And I think it’s worth reflecting on the fact that the burdens of war are distributed very unequally, not just between combatants, but also within the societies that fight. As a democratic nation, in the United States we make a decision as a country to go to war, through our elected representatives. And yet, it’s a very tiny slice of the population that bears the burden for that war, not just putting themselves at risk, but also carrying the moral burden of that afterwards.

And so if you say, well, I want there to be someone who’s going to live with that trauma for the rest of their life, I think that’s an argument that one can make, but you need to acknowledge that that’s real. And that’s not a burden that we all share equally, it’s a burden we’re placing on the young women and men that we send off to fight on our behalf. The flip side is, if we didn’t do that, if we fought a war and no one felt the moral burden of killing, no one slept uneasy at night afterwards, what would that say about us as a society? I think these are difficult questions. I don’t have easy answers to that. But I think these are challenging things for us to wrestle with.

Lucas Perry: Yes, I mean, there’s a lot there. I think that was a really good illustration of the different points of view on this. I hadn’t heard or considered much the implications of post-traumatic stress and, as you called it, moral burden, which are things that autonomous weapons would relieve in countries which have the power to develop them. Speaking personally, I find most compelling the arguments about the necessity of having human beings integrated in the process of decision making with regards to killing, because if you remove that, then you’re removing a deep aspect of humanity, which sometimes does not follow the laws of war; we currently don’t have preference learning and machine learning techniques complex enough to actually train autonomous weapon systems on everything that human beings value and care about, and there are situations where deviating from the laws of war may be the best thing to do. I’m not sure if you have any thoughts about this, but I think you did a good job of illustrating all the different positions, and that’s just my initial reaction to it.

Paul Scharre: Yes, these are tricky issues. And so I think one of the things I want to try to do for listeners is to lay out the landscape of what these arguments are, and some of the pros and cons of them, because I think people will often oversimplify on all sides. People will be like, well, we should have humans involved in making these decisions. Well, humans involved where? If I get into a self-driving car that has no steering wheel, it’s not true that there’s no human involvement. The type of human involvement has just changed in terms of where it exists. So now, instead of manually driving the car, I’m still choosing the car’s destination, I’m still telling the car where I want to go. I’m not going to get into the car and say, car, take me wherever you want to go. So the type of human involvement has changed.

So what kind of human relationship do we want with decisions about life and death in the battlefield? What type of human involvement is right or necessary or appropriate, and for what reason? For a legal reason, for a moral reason? These are interesting challenges we haven’t had to confront before. These arguments I think unfairly get simplified on all sides. Conversely, you hear people say things like, it doesn’t matter, because these weapons are going to get built anyway. That’s a little bit overly simplistic in the sense that there are examples of successes in arms control. It’s hard to pull off. There are many examples of failures as well, but there are places where civilized nations have walked back from some technologies to varying degrees of success, whether it’s chemical weapons or biological weapons or other things. So what does success look like in constraining a weapon? Is it that no one ever uses the weapon? That most nations don’t use it? That it’s not used in certain ways? These are complicated issues.

Lucas Perry: Right. So let’s talk a little bit here about integrating human emotion and human reasoning and humanity itself into the autonomous weapon systems and the life or death decisions that they will be making. So hitting on a few concepts here, if you could help explain what people mean when they say human in the loop, and human on the loop, and how this relates to the integration of human control and human responsibility and human accountability in the use of autonomous weapons.

Paul Scharre: Let’s unpack some of this terminology. Broadly speaking, people tend to use the terms human in the loop, on the loop, or out of the loop. For semi-autonomous weapons, the human is in the loop, which means that for any semi-autonomous process or system, the machine is taking an action and then it pauses and waits for a human to take a positive action before proceeding. A good example of a human-in-the-loop system is the automated backups on your computer when they require you to push a button to say okay to do the backup now. They’re waiting on a human action before proceeding. In a human-on-the-loop system, or one with supervisory control, the human doesn’t have to take any positive action for the system to proceed. The human can intervene, so the human can sit back, and if you want to, you can jump in.

An example of this might be your thermostat. When you’re in the house, you’ve already set the parameters, and it’ll turn the heat and air conditioning on and off on its own, but if you’re not happy with the outcome, you can change it. Now, when you’re out of the house, your thermostat is operating in a fully autonomous fashion in this respect, where the human is out of the loop. You don’t have any ability to intervene for some period of time. It’s really all about time duration. For supervisory control, how much time does the human have to identify that something is wrong and then intervene? So for example, things like the Tesla autopilot. That’s one where the human is in a supervisory control capacity. With the autopilot function in a car, the human doesn’t have to do anything, the car’s driving itself, but they can intervene.

The problem with some of those control architectures is whether the time that you are permitting people to identify that there’s a problem, figure out what’s going on, decide to take action, and intervene is really realistic before harm happens. Is it realistic that a human can be not paying attention, and then all of a sudden identify that the car is in trouble and leap into action to avoid an accident when you’re speeding down the highway at 70 miles an hour? You can see quite clearly in a number of fatal accidents with these autopilots that that’s not feasible. People actually aren’t capable of doing that. So you’ve got to think about what the role of the human in this process is. It’s not just a semi-autonomous or supervised autonomous or fully autonomous process; it’s one where the human is involved in some varying capacity.

And what are we expecting the human to do? Same thing with something that’s fully autonomous. We’re talking about a system that’s operating on its own for some period of time. How long before it checks back in with a person? What information is that person given? And what is their capacity to intervene, or how badly could things go wrong when the person is not involved? And when we talk about weapons specifically, there are lots of weapons that operate in a semi-autonomous fashion today, where the human is choosing the target, but there’s a lot of automation in identifying targets, presenting information to people, and in actually carrying out an attack once the human has chosen a target. There are many, many weapons that are what the military calls fire-and-forget weapons; once one is launched, it’s not coming back. Those have been widely used for 70 years, since World War Two. So that’s not new.

There are a whole bunch of weapons that operate in a supervisory autonomy mode, where the human is on the loop. These are generally used in a more limited fashion for immediate, localized defense of air bases or ships or ground vehicles, defending against air or missile or rocket attack, particularly when the speed of these attacks might overwhelm people’s ability to respond. If humans had to be in the loop, if humans had to push a button every time there’s a missile coming in, you could have so many missiles coming in so fast that you simply have to activate an automatic defensive mode that will shoot down all of the missiles based on some pre-programmed parameters that humans put into the system. This exists today. These systems have been around for decades, since the 1980s, and they are in widespread use, with at least 30 countries around the globe operating them. So this is a type of weapon system that’s already in operation, these supervisory autonomous weapons. What really would be new would be fully autonomous weapons that operate on their own. Humans are still building them and launching them and putting them into operation, but then there’s some period of time where they’re able to search a target area for targets, find those targets, and then, based on some programming that was designed by people, identify the targets and attack them on their own.

Lucas Perry: Would you consider that out of the loop for that period of time?

Paul Scharre: Exactly. So over that period of time, humans are out of the loop on the decision about which targets they’re attacking. That would be largely a new development in war. There are some isolated cases of weapon systems that cross this line, but by and large that would be new. That’s at least the starting point of what people might be concerned about. Now, you might envision things that are more advanced beyond that, but that’s sort of the near-term development that could be on the horizon in the next five to 15 years: telling the weapon system, go into this area, fly around or search around underwater, and find any ships of this type and attack them, for some period of time and space. And that changes the human’s relationship with the use of force a little bit. It doesn’t mean the human is not involved at all, but the human is not quite as involved as they used to be. And is that something we’re comfortable with? And what are the implications of that kind of shift in warfare?
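A minimal sketch of the in-the-loop / on-the-loop / out-of-the-loop distinction described above, assuming a toy decision gate; the mode names, function, and parameters here are hypothetical illustrations, not any real weapon-control interface or anything discussed in the conversation itself.

```python
from enum import Enum, auto

class ControlMode(Enum):
    IN_THE_LOOP = auto()      # human must take a positive action before the system proceeds
    ON_THE_LOOP = auto()      # system proceeds unless a human intervenes in time
    OUT_OF_THE_LOOP = auto()  # no human intervention for some bounded period

def authorize_engagement(target_matches_criteria: bool,
                         mode: ControlMode,
                         human_approved: bool = False,
                         human_vetoed: bool = False) -> bool:
    """Toy gate showing where the human sits in each control mode."""
    if mode is ControlMode.IN_THE_LOOP:
        # The machine pauses and waits for a positive human action.
        return target_matches_criteria and human_approved
    if mode is ControlMode.ON_THE_LOOP:
        # The machine proceeds on its own unless a human vetoes in time;
        # the practical question is whether the veto window is realistic.
        return target_matches_criteria and not human_vetoed
    # OUT_OF_THE_LOOP: for the delegated period, only the parameters
    # set in advance by people constrain the action.
    return target_matches_criteria

# The same target, three different human relationships to the decision.
for mode in ControlMode:
    print(mode.name, authorize_engagement(True, mode))
```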

Lucas Perry: So the relevant things here are how this helps to integrate human control and human responsibility and human accountability into autonomous weapons systems. And just hearing you speak about all of that, it also seems like very relevant questions have to do with human psychology, about what human beings are actually likely to be able to do. And then also, I think you articulately put the practical question of whether or not people will be able to react to certain threats given certain situations. So in terms of trying to understand acceptable and unacceptable uses of autonomous weapons, that seems to supervene upon a lot of these facets of benefits and disadvantages of in the loop, on the loop, and out of the loop for different situations and different risks, plus how much we’re willing to automate killing and death and remove human decision making from some of these situations or not.

Paul Scharre: Yes, I mean, I think what’s challenging in this space is that it would be nice, it would be ideal, if we could reach agreement among nations on sort of a lethal laws of robotics. In Isaac Asimov’s books about robots you have these three laws of robotics. Well, those laws aren’t going to work, because one of them is not harming a human being, and that’s not going to work in the military context. But could there be some agreement among countries on lethal laws of robots that would govern the behavior of autonomous systems in war, that might say, these are the things that are acceptable or not? Maybe. Maybe that’s possible someday. I think we’re not there yet; at the least, there is certainly no agreement, and there is widespread disagreement among nations about what approach to take. But a good starting position is trying to understand what goals we want to achieve. And I think you’re right that we need to keep the human sort of front and center. But there is a really important asymmetry between humans and machines that’s worth highlighting, which is that the laws of war govern effects in the battlefield. The laws of war don’t say the human has to pick every target; the laws of war say that the use of force must be executed according to certain principles of distinction and proportionality and other things.

One important asymmetry in the laws of war, however, is that machines are not legal agents. Only humans are legal agents. And so it’s ultimately humans that are responsible for complying with the laws of war. You can’t put a machine on trial for a war crime. It doesn’t make sense. It doesn’t have intentionality. So it’s ultimately a human responsibility to ensure this kind of compliance with the laws of war. It’s a good starting point then for conversation to try to understand: if we start from the proposition that it’s a human responsibility to ensure compliance with the laws of war, then what follows from that? What bounds does that place on human involvement? One of the early parts of the conversations on autonomous weapons internationally came from a very technologically based direction: to say, well, based on the technology, draw these lines, put these limits in place. The problem with that approach is not that you can’t do it.

The problem is: the state of the technology when? In 2014, when discussions on autonomous weapons started, at the very beginning of the deep learning revolution? Today, in 2020? Our estimate of where the technology might be in five years or 10 years or 50 years? The technology is moving so quickly that any technologically based set of rules about how we should approach this problem, and about what the appropriate use of machines versus human decision making in the use of force is, is one that we may look back on in 10 years or 20 years and say is wrong. We could get it wrong in the sense that we might be leaving valuable technological opportunities on the table, banning technology that, if we used it, actually might make war more humane and reduce civilian casualties; or we might be permitting technologies that turned out in retrospect to be problematic, and we shouldn’t have done that.

And one of the things we’ve seen historically when you look at attempts to ban weapons is that ones that are technologically based don’t always fare very well over time. So for example, the early bans on poison gas banned the use of poison gas launched from artillery shells. They actually allowed poison gas administered via canisters, and so the first use of poison gas in World War One by the Germans was canister based: they just laid out little canisters and then opened the valves. Now, that turns out to be a not very practical way of using poison gas in war, because you have someone basically on your side standing over this canister, opening a valve and then getting gassed. And so it’s a little bit tricky, but technically permissible.

One of the things that can be challenging is it’s hard to foresee how the technology is going to evolve. A better approach, and one that we’ve seen the dialogue internationally sort of shift towards, is a human-centered approach. To start from the position of the human and say, look, if we had all the technology in the world in war, what decisions would we want humans to make and why? Not because the technology cannot make decisions, but because it should not. I think it’s actually a very valuable starting place for this conversation, because the technology is moving so quickly.

What role do we want humans to play in warfare, and why do we think this is the case? Are there some tasks in war, or some decisions, that we think are fundamentally human, that should be decisions only humans make and that we shouldn’t hand off to machines? I think that’s a really valuable starting position from which to better interrogate how we want to use this technology going forward. Because the landscape of technological opportunity is going to keep expanding. So what do we want to do with this technology? How do we want to use it? And are there ways that we can use this technology that keep humans in control of the use of force in the battlefield, that keep humans legally and morally and ethically responsible, but that make war more humane, that make war more precise, that reduce civilian casualties, without losing our humanity in the process?

Lucas Perry: So I guess the thought experiment there would be: if we had weapons that let us just delete people instantly without consequences, how would we want human decision making to be integrated with that? Reflecting on that also makes me consider another point that I think is important for my considerations around lethal autonomous weapons, which is the necessity of integrating human experience of the consequences of war, the pain and the suffering and the carnage and the PTSD, as being, to some extent, necessary vehicles to make us tired of war and to internalize how horrible it is. So I would just be interested in integrating that perspective: it’s not just about humans making decisions and the decisions being integrated in the execution process, but also about the experiential ramifications of being in relation to what actually happens in war, what violence is like, and what happens in violence.

Paul Scharre: Well, I think that we want to unpack a little bit some of the things you’re talking about. Are we talking about ensuring that there is an accurate representation, to the people carrying out the violence, of what’s happening on the other end, that we’re not sanitizing things? And I think that’s a fair point. When we begin to put more psychological barriers between the person making the decision and the effects, it might be easier for them to carry out larger-scale attacks. Or are we talking about actually making war more horrible? Now that’s a line of reasoning, I suppose, to say we should make war more horrible so there’ll be less of it. I’m not sure we would get the outcome that there is less of it. We just might have more horrible war, but that’s a different issue. Those are more difficult questions.

I will say that I often hear philosophers raising things about skin in the game. I rarely hear them being raised by people who have had skin in the game, who have experienced up close and in a personal way the horrors of war. And I’m less convinced that there’s a lot of good that comes from the tragedy of war. I think there’s value in us trying to think about how we make war less terrible, how we reduce civilian casualties, how we have less war. But this often comes up in the context of technologies, this idea that we should somehow put ourselves at risk. No military does that; no military has ever done that in human history. The whole purpose of militaries getting technology and training is to get an advantage on the adversary. It’s not a fair fight. It’s not supposed to be, it’s not a boxing match. So these are things worth exploring. We need to come from the standpoint of the reality of what war is and not from a philosophical exercise about what war might be, but deal with the realities of what actually occurs in the battlefield.

Lucas Perry: So I think that’s a really interesting point. And as someone with a background and interest in philosophy, it’s quite funny. So you do have experience in war, right?

Paul Scharre: Yes, I’ve fought in Iraq and Afghanistan.

Lucas Perry: Then it’s interesting for me, this distinction you see between philosophers and people who are actually veterans, who have experienced the violence and carnage and tragedies of war. And the perspective here is that PTSD and the trauma associated with these kinds of experiences, you find, are less salient for decreasing people’s willingness or decision to engage in further war. Is that your claim?

Paul Scharre: I don’t know. No, I don’t know. I don’t know the answer to that. That’s a difficult question for political scientists to figure out, about the voting preferences of veterans. All I’m saying is that I hear a lot of claims in this space that I think are often not very well interrogated or not very well explored. And there’s a real price that people pay for being involved. Now, people want to say that we’re willing to bear that price for some reason; okay, but I think we should acknowledge it.

Lucas Perry: Yeah, that makes sense. I guess the thing that I was just pointing at was that it would be psychologically interesting to know whether philosophers, being detached from the experience, maybe don’t actually know about the psychological implications of being involved in horrible war. And if people who are actually veterans disagree with philosophers about the importance of there being skin in the game, if philosophers say that skin in the game reduces the willingness to go to war and the claim from veterans is that it wouldn’t actually decrease their willingness to go to war, I think that seems psychologically very important and relevant, because there is this concern about how autonomous weapons, and the degree of human decision making integrated into lethal autonomous weapons, would potentially sanitize war. And so there’s the trade-off between the potential mitigating effects of being involved in war, and also the negative effects which are incurred by veterans who would actually have to be exposed to it and bring the trauma back for communities to have a deeper experiential relation with.

Paul Scharre: Yes, and look, we don’t do that, right? We had a whole generation of veterans come back from Vietnam, and did we as a society listen to their stories and understand them? No. I have heard over the years people raise this issue, whether it’s drones or autonomous weapons, this issue of having skin in the game, either physically being at risk or psychologically. And I’ve rarely heard it raised by people who have themselves been the ones on the line. People often have very gut emotional reactions to this topic. And I think that’s valuable because it’s speaking to something that resonates with people. There’s the emotional reaction opposed to autonomous weapons, which you often get from many people who go: there’s something about this, it doesn’t feel right, I don’t like this idea. Or the opposite reaction, other people who say, “wouldn’t this make war great, it’s more precise and more humane,” to which my reaction is often a little bit like… have you ever interacted with a computer? They break all the time. What are you talking about?

But all of these things, I think, are speaking to instincts that people have about this technology, and it’s worth asking questions to better understand: what is it that we’re reacting to? Is it an assumption about the technologies? Is it an assumption about the nature of war? One of the concerns I’ve heard raised is that this will impersonalize war and create more distance between the people doing the killing. If you buy that argument, that impersonal war is a bad thing, then you would say the greatest thing would be deeply personal war, like hand-to-hand combat. It seems to harken back to some glorious age of war when people looked each other in the eye and hacked each other to bits with swords, like real humans. It’s not that that kind of war never occurred in human history. In fact, we’ve had conflicts like that, even in recent memory, that involve hand-to-hand weapons. They tend not to be very humane conflicts. When we see civil violence, when people are murdering each other with machetes or garden tools or other things, it tends to be horrific communal violence, mass atrocities, in Rwanda or Cambodia or other places. So I think it’s important to deal with the reality of what war is and not some fantasy.

Lucas Perry: Yes, I think that that makes a lot of sense. It’s really tricky. And the psychology around this I think is difficult and probably not studied enough.

Paul Scharre: There’s real war that occurs in the world, and then there’s the fantasy of war that we, as a society, tell ourselves when we go to movie theaters and we watch stories about soldiers who are heroes, who conquer the bad guys. We’re told a fantasy, and it’s a fantasy that allows society to perpetuate wars, that allows us to send young men and women off to die. And it’s not to say that there are no circumstances in which a nation might need to go to war to defend itself or its interests, but we dress war up in these pretty clothes, and let’s not confuse that with the reality of what actually occurs. People say, well, with autonomous weapons, we won’t have people weighing the value of life and death. I mean, that happens sometimes, but it’s not like every time someone dies in war there was this thoughtful exercise where a committee sat around and said, “Do we really need to kill this person? Is it really appropriate?” There’s a lot of dehumanization that goes on on the battlefield. So I think this is what makes this issue very challenging. Many of the objections to autonomous weapons are objections to war. That’s what people are actually objecting to.

The question isn’t, is war bad? Of course war’s terrible. The question is, how do we find ways going forward to use technology that may make war more precise and more humane without losing our humanity in the process, and are there ways to do that? It’s a challenging question. I think the answer is probably yes, but it’s one that’s going to require a lot of interrogation to try to get there. It’s a difficult issue because it’s also a dynamic process where there’s an interplay between competitors. If we get this wrong, we can easily end up in a situation where there’s less human control and there’s more violence in war. There are lots of opportunities to make things worse as well.

If we could make war perfect, that would be great, in terms of no civilian suffering, reduced suffering of enemy combatants, and fewer lives lost. If we could push a button and make war go away, that would be wonderful. Those things would all be great. The more practical question really is, can we improve upon the status quo, and how can we do so in a thoughtful way, or at least not make things worse than today? And I think those are hard enough problems to try to address.

Lucas Perry: I appreciate that you bring a very holistic, well-weighed perspective to the varying sides of this issue. So these are all very big and difficult questions. Are you aware of people actually studying whether some of these effects exist or not, and whether autonomous weapons would actually sanitize things or not? Or is this basically all just coming down to people’s intuitions and the simulations in their heads?

Paul Scharre: Some of both. There’s really great scholarship that’s being done on autonomous weapons. Certainly there’s a robust array of legal scholarship, people trying to understand how the laws of war might interface with autonomous weapons. But there’s also been work done thinking about some of these human psychological interactions. Missy Cummings, who’s at Duke and runs the humans and automation lab there, has done some work on human-machine interfaces on weapon systems to think through some of these concerns. I think there’s probably been less attention paid to the human-machine interface dimension of this and the human psychological dimension of it. But there’s been a lot of work done by people like Heather Roff, people at Article 36, and others thinking about concepts of meaningful human control and what that might look like in weapon systems.

I think one of the things that’s challenging across the board on this issue is that it is a politically contentious topic. You have different levels of this debate going on: you have scholars trying to understand the issue, and then you also have a whole array of politically motivated groups, international organizations, civil society organizations, and countries duking it out, basically, at the UN and in the media about where we should go with this technology. And you get a lot of motivated reasoning on all sides about what the answer should be. So for example, one of the things that fascinates me is that I’ll often hear people say, autonomous weapons are terrible, and they’ll have a terrible outcome, and we need to ban them now, and if we just pass a treaty and we have enough political will, we could ban them. I’ll also hear people say a ban would be pointless, it wouldn’t work, and anyways, wouldn’t autonomous weapons be great? There are other possible beliefs. One could say that a ban is feasible, but the weapons aren’t that big of a deal. So it seems to me like there’s a lot of politically motivated reasoning that goes on in this debate, which makes it very challenging.

Lucas Perry: So one of the concerns around autonomous weapons has to do with accidental escalation of warfare and conflict. Could you explore this point and explain what some strategies might be to prevent accidental escalation of warfare as AI is increasingly being used in the military?

Paul Scharre: Yes, so I think in general you could bucket concerns about autonomous weapons into two categories. One is a concern that they may not function very well and could have accidents. Those accidents could lead to civilian casualties, or they could lead to accidental escalation among nations in a crisis, with military forces operating in close proximity to one another. This happens with people. And you might worry about accidents with autonomous systems: maybe one shoots down an enemy aircraft, and there’s an escalation and people are killed. And then how do you unwind that? How do you communicate to your adversary: we didn’t mean to do that, we’re sorry? How do you do that in a period of tension? That’s a particular challenge.

There’s a whole other set of challenges that come if the weapons do work. And that might get to some of these deeper questions about the role of humans in decision making about life and death. But this issue of accidental escalation falls into the category of them not working very well, not being reliable. And this is the case for a lot of AI and autonomous technology today, which isn’t to say it doesn’t work at all; if it didn’t work at all, it would be much easier. There’d be no debates about bias in facial recognition systems if they never identified faces. There’d be no debates about safety with self-driving cars if the car couldn’t go anywhere. The problem is that a lot of these AI-based systems work very well in some settings, and then if the settings change ever so slightly, they don’t work very well at all anymore. The performance can drop off very dramatically, and they’re not very robust to changes in environmental conditions. So this is a huge problem for the military, because in particular, the military doesn’t get to test its systems in its actual operating environment.

So you can take a car, and you can take it on the roads, and you can test it in an actual driving environment. And we’ve seen car companies rack up 10 million miles or more of driving data. And then they can go back and they can run simulations. Waymo has said that they run 10 million miles of simulated driving every single day. And they can simulate different lighting conditions, different environmental conditions. Well, the military can build simulations too, but simulations of what? What will the next war look like? Well, we don’t know, because we haven’t fought it yet. The good news is that war is very rare, which is great. But that also means that for these kinds of systems, we don’t necessarily know the operating conditions they’ll be in, and so there is this real risk of accidents. And it’s exacerbated by the fact that this is also a very adversarial environment. So you actually have an enemy who’s trying to trick your system and manipulate it. That adds another layer of complications.

Driving is a little bit competitive, maybe somebody doesn’t want to let you into the lane, but the pedestrians aren’t generally trying to get hit by cars. That’s a whole other complication in the military space. So all of that leads to concerns that the systems may do okay in training, and then we take them out into the real world, and they fail, and they fail in a pretty bad way. For a weapon system that is making its own decisions about whom to kill, it could be that it fails in a benign way, where it targets nothing, and that’s a problem for the military that built it; or it fails in a more hazardous way, a dangerous way, and attacks the wrong targets. And when we’re talking about an autonomous weapon, the essence of this weapon is making its own decisions about which targets to attack and then carrying out those attacks. If you get that wrong, there could be pretty significant consequences. One of those could be civilian harm, and that’s a major concern. There are processes in place for vetting that, operational test and evaluation; are those sufficient? I think there are good reasons to say that maybe they’re not sufficient, or not completely sufficient, and they need to be revised or improved.

And I’ll point out, and we can come back to this, that the US Defense Department actually has a more stringent procedure in place for reviewing autonomous weapons than for other weapons, beyond what the laws of war require; the US is one of the few countries that has this. But then there’s also the question of accidental escalation, which could also happen. Would that lead to an entire war? Probably not. But it could make it a lot harder to defuse tensions in a crisis, and that could be problematic. So we just had an incident not too long ago, where the United States carried out an attack against a very senior Iranian general, General Soleimani, who was the head of the Iranian Quds Force, and killed him in a drone strike. And that was an intentional decision made by a person somewhere in the US government.

Now, did they fully think that through? I don’t know, that’s a different question. But a human made that decision in any case. Well, that was a huge escalation of hostilities between the US and Iran. And there was a lot of uncertainty afterwards about what would happen, and Iran launched some ballistic missiles against US troops in Iraq. And whether that’s it, or there’s more retaliation to come, I think we’ll see. But it could be a much more challenging situation if you had a situation in the future where an autonomous weapon malfunctioned and took some action, and now the other side might feel compelled to respond. They might say, well, we have to, we can’t let this go. Because human emotions are on the line, and national pride and prestige, and they feel like they need to maintain a principle of deterrence and they need to retaliate. So these could all be very complicated things if you had an accident with an autonomous weapon.

Lucas Perry: Right. And so an adjacent issue that I’d like to explore now is how a potential arms race can have interplay with issues around accidental escalation of conflict. So is there already an arms race brewing for autonomous weapons? If so, why and what could potentially be done to deescalate such a situation?

Paul Scharre: If there’s an arms race, it’s a very strange one, because no one is building the weapons. We see militaries advancing in robotics and autonomy, but we don’t really see this rush to build autonomous weapons. I struggle to point to any programs that I’m aware of in militaries around the globe that are clearly oriented toward building fully autonomous weapons. I think there are lots of places where, much like the incremental advances of autonomy in cars, you can see more autonomous features in military vehicles and drones and robotic systems and missiles. They’re adding more autonomy. And one might reasonably be concerned about where that’s going. But it’s simply not the case that militaries have declared their intention: we’re going to build autonomous weapons, and here they are, and here’s our program to build them. I would struggle to use the term arms race. It could happen; maybe we’re at the starting line of an arms race. But I don’t think we’re in one today by any means.

It’s worth also asking, when we say arms race, what do we mean and why do we care? This is again one of these terms that’s often thrown around. You’ll hear it with the concept of autonomous weapons or AI, people say we shouldn’t have an arms race. Okay. Why? Why is an arms race a bad thing? Militaries normally invest in new technologies to improve their national defense. That’s a normal activity. So if you say arms race, what do you mean by that? Is it something beyond normal activity? And why would that be problematic? In the political science world, the specific definitions vary, but generally, an arms race is viewed as an increase in defense spending overall, or in a particular technology area, above normal levels of military modernization. Now, usually this is seen as problematic for a couple of reasons. One could be that it ends up in a massive national expenditure, as in the case of nuclear weapons during the Cold War, that doesn’t really yield any military value or increase anyone’s defense or security; it just ends up flushing a lot of money down the drain. That’s money that could be spent elsewhere, on pre-K education or healthcare or something else that might be societally beneficial, instead of building all of these weapons. So that’s one concern.

Another one might be that we end up in a world where the sheer number of these weapons, or the type of weapons, makes us worse off. Are we really better off in a world where there are tens of thousands of nuclear weapons on hair-trigger alert versus a few thousand weapons, or a few hundred weapons? Even if we never get to zero, all things being equal, probably fewer nuclear weapons is better than more of them. So that’s another kind of concern, whether in terms of the violence and destructiveness of war if a war breaks out, or the likelihood of war and the stability of a crisis. This is an area where, certainly from a spending standpoint, we’re not in any way in an arms race for autonomous weapons or AI today. When you look at actual expenditures, they’re a small fraction of what militaries are spending overall, if you look at, say, AI or autonomous features at large.

And again, for autonomous weapons, there really aren’t, at least openly, declared programs that say go build a fully autonomous weapon today. But even if that were the case, why is that bad? Why would a world where militaries are racing to build lots of autonomous weapons be a bad thing? I think it would be a bad thing, but I think it’s also worth just answering that question, because it’s not obvious to everyone. This is something that’s often missing in a lot of these debates and dialogues about autonomous weapons: people may not share some of the underlying assumptions. It’s better to bring out those assumptions and explain, I think this would be bad for these reasons, because maybe it’s not intuitive to other people who don’t share those reasons, and articulating them could increase understanding.

For example, the FLI letter on autonomous weapons from a few years ago said, “the key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.” I like the language, it’s very literary, “the Kalashnikovs of tomorrow.” It’s a very concrete image. But there’s a whole bunch of assumptions packed into those few sentences that maybe go unexamined in a letter that’s intended to galvanize public interest and attention, but are worth really unpacking. What do we mean when we say autonomous weapons are the Kalashnikovs of tomorrow, and why is that bad? And what does that mean? Those are, I think, important things to draw out and better understand.

It’s particularly hard for this issue because the weapons don’t exist yet. So it’s not actually like debates around something like landmines. We could point to the mines and say, “this is a landmine, we all agree this is a landmine, this is what it’s doing to people.” And everyone could agree on the harm being caused. People might disagree on what to do about it, but there’s agreement on what the weapon is and what the effect is. But for autonomous weapons, all of these things are up for debate. Even the term itself is not clearly defined. And when I hear people describe it, they can be describing a whole range of things. Some people, when they say the word autonomous weapon, are envisioning a Roomba with a gun on it. Other people are envisioning the Terminator. Now, both of those things are probably bad ideas, but for very different reasons. And that is important to draw out in these conversations. When you say autonomous weapon, what do you mean? What are you envisioning? What are you worried about? Are you worried about certain types of scenarios or certain types of effects?

If we want to get to the place where we really, as a society, come together and grapple with this challenge, I think first and foremost better communication is needed. People may still disagree, but it’s much more helpful. Stuart Russell from Berkeley has talked a lot about the dangers of small anti-personnel autonomous weapons that could be widely proliferated. He made the Slaughterbots video that’s been seen millions of times on YouTube. That’s a very specific image. It’s an image that’s very concrete. So then you can say, when Stuart Russell is worried about autonomous weapons, this is what he’s worried about. And then you can start to try to better understand the assumptions that go into that.

Now, I don’t share Stuart’s concerns, and we’ve written about it and talked about it before, but it’s not actually because we disagree about the technology. I would agree that that’s very doable with existing technology. We disagree about the social responses to that technology, and how people respond, and what the countermeasures are, and what the ways to prevent proliferation are. So we, I think, disagree on some of the political or social factors that surround how people approach this technology and use it. Sometimes people actually totally agree on the risks and even maybe the potential futures, they just have different values. And there might be some people whose primary value is trying to have fewer weapons in the world. Now, that’s a noble goal. And they’re like, hey, any way that we can have fewer weapons, fewer advanced technologies, that’s better. That’s very different from someone who’s coming from a position of saying, my goal is to improve my own nation’s defense. That’s a totally different value system, a totally different set of preferences. And they might say, I also value what you say, but I don’t value it as much, and I’m going to take actions that advance these preferences. It’s important to really try to draw these out and understand them in this debate, if we’re going to get to a place where we can, as a society, come up with some helpful solutions to this problem.

Lucas Perry: Wonderful. I’m totally on board with that. Two questions and confusions on my end. The first is, I feel a bit confused when you say these weapons don’t exist already. It seems to me more like autonomy exists on a spectrum and is the integration of many different technologies and decision making in systems. It seems to me there is already a certain degree of autonomy. There isn’t Terminator-level autonomy, where you specify an objective and the autonomous system can just basically go execute it, which seems to require a very high level of generality, but there already seems to exist some level of autonomy today.

And so in that video, Stuart says that slaughterbots in particular represent a miniaturization and integration of many technologies which already exist today. And the second thing that I’m confused about is when you say that it’s unclear to you that militaries are very interested in this or that there currently is an arms race. It seems like, yes, there isn’t an arms race like there was with nuclear weapons, where it was very clear and there were Manhattan Project-like efforts around the technology. But given the strategic advantage conferred by this technology now and likely soon, it seems to me, game theoretically, from the position of militaries around the world that have the capacity to invest in these things, that it is inevitable, given their battlefield importance, that there would be massive ramping up of investments, or that there already is great interest in developing the autonomy and the subtechnologies required for developing fully autonomous systems.

Paul Scharre: Those are great questions and right on point. And I think the central issue in both of your questions is, when we say these weapons, or when I say these things, I should be more precise: when we say autonomous weapons, what do we mean exactly? And this is one of the things that can be tricky in this space, because there are not these universally agreed-upon definitions. There are certainly many weapon systems used widely around the globe today that incorporate some autonomous features. Many of these are fire-and-forget weapons. When someone launches them, they’re not coming back. They have, in that sense, autonomy to carry out their mission. But the autonomy is relatively limited and narrowly bounded, and humans, for the most part, are choosing the targets. So you can think of maybe these three classes of weapons. There are semi-autonomous weapons, where humans are choosing the targets, but there’s lots of autonomy surrounding that decision, cueing information to people, flying the munition once the person launches it. That’s one type of weapon, widely used today by really every advanced military.

Another is supervised autonomous weapons that are used in relatively limited settings for defensive purposes, where there is this automatic mode that people can turn on and activate to defend the ship or the ground base or the vehicle. And these are really needed for situations where the incoming threats are too fast for humans to respond. These again are widely used around the globe and have been in place for decades. And then there are what we could call fully autonomous weapons, where the human is launching them and the human programs in the parameters, but they have some freedom to fly a search pattern over some area and then, once they find a target, attack it on their own. For the most part, with some exceptions, those weapons are not widely used today. There have been some experimental systems that have been designed. There have been some put into operation in the past. The Israeli Harpy drone is an example of this that is still in operation today. It’s been around since the ’90s, so it’s not really very new. And it’s been sold to a handful of countries, India, Turkey, South Korea, China, and the Chinese have reportedly reverse-engineered their own version of it.

But it’s not widespread. It’s not a major component of militaries’ order of battle. I think you see militaries investing in robotic systems, but the bulk of their fleets are still human-occupied platforms; robotics are largely an adjunct to them. And in terms of spending, while there is increased spending on robotics, most of the spending is still going towards more traditional military platforms. The same is also true of the degree of autonomy: most of these robotic systems are just remote controlled, and they have very limited autonomy today. Now, we’re seeing more autonomy over time in both robotic vehicles and in missiles. But militaries have a strong incentive to keep humans involved.

It is absolutely the case that militaries want technologies that will give them an advantage on the battlefield. But part of achieving an advantage means your systems work, they do what you want them to do, the enemy doesn’t hack them and take them over, you have control over them. All of those things point to more human control. So I think that’s where you actually see militaries trying to figure out, where’s the right place on the spectrum of autonomy? How much autonomy is right? And that line is going to shift over time. But it’s not the case that they necessarily want just full autonomy, because what does that even mean? They do want weapon systems to operate under some degree of human direction and involvement. It’s just that what that looks like may evolve over time as the technology advances.

And there are also, I should add, other bureaucratic factors that come into play. Militaries’ investments are not entirely strategic. There’s bureaucratic politics within organizations. There’s politics more broadly, with the domestic defense industry interfacing with the political system in that country. These might drive resources in certain directions. There’s some degree of inertia, of course, in any system. Those are also factors in play.

Lucas Perry: So I want to hit here a little bit on longer term perspectives. The Future of Life Institute in particular is interested in mitigating existential risks. We’re interested in the advanced risks from powerful AI technologies, where AI not aligned with human values and goals and preferences and intentions can potentially lead us to suboptimal equilibria that we’re trapped in permanently, or could lead to human extinction. Other technologies we care about are nuclear weapons, and synthetic biology enabled by AI technologies, etc. So there is this view here that if we cannot establish a governance mechanism as a global community on the concept that we should not let AI make the decision to kill, then how can we deal with more subtle near term issues and eventual long term safety issues around powerful AI technologies? So there’s this view of ensuring beneficial outcomes around lethal autonomous weapons, or at least beneficial regulation and development of that technology, and the necessity of that for addressing longer term AI risk and value alignment with AI systems as they become increasingly intelligent. I’m curious to know if you have a view or perspective on this.

Paul Scharre: This is the fun part of the podcast with the Future of Life Institute, because this rarely comes up in a lot of the conversations; I think in a lot of the debates, people are focused on much more near term issues surrounding autonomous weapons or AI. I think that if you’re inclined to see that there are longer term risks from more advanced developments in AI, then it’s very logical to say that there’s some value in humanity coming together to come up with some set of rules about autonomous weapons today, even if the specific rules don’t really matter that much, because the level of risk is maybe not as significant. The process of coming together and agreeing on some set of norms and limits, particularly on military applications of AI, is probably beneficial and may begin to create the foundations for future cooperation. The stakes for autonomous weapons might be big, but they are certainly not existential, in any reasonable interpretation of what autonomous weapons might really do, unless you start thinking about autonomy wired into, like, nuclear launch decisions, which is basically nuts. And I don’t think that’s really what’s on the table for what people might realistically be worried about.

When we try to come together as a human society to grapple with problems, we’re basically forced to deal with the institutions that we have in place. So for example, for autonomous weapons, we’re having debates in the UN Convention on Certain Conventional Weapons, the CCW. Is that the best forum for talking about autonomous weapons? Well, it’s kind of the forum that exists for this kind of problem set. It’s not bad. It’s not perfect in some respects, but it’s the one that exists. And so if you’re worried about future AI risk, creating the institutional muscle memory among the relevant actors in society, whether it’s nation states, AI scientists, members of civil society, militaries, if you’re worried about military applications, whoever it is, to come together, to have these conversations, and to come up with some answer, and maybe set some agreements, some limits, is probably really valuable, actually, because it begins to establish the right human networks for collaboration and cooperation. Because it’s ultimately people; it’s people who know each other.

So, oh, “I worked with this person on this last thing.” If you look at, for example, the international movement that the Campaign to Stop Killer Robots is spearheading, that institution or framework, those people, those relationships are born out of past successful efforts to ban landmines and then cluster munitions. So there’s a path dependency in human relationships and bureaucracies and institutions that really matters. Coming together and reaching any kind of agreement, actually, to set some kind of limits is probably really vital to start exercising those muscles today.

Lucas Perry: All right, wonderful. And a final fun FLI question for you. What are your views on long term AI safety considerations? Do you view AI eventually as an existential risk and do you integrate that into your decision making and thinking around the integration of AI and military technology?

Paul Scharre: Yes, it’s a great question. It’s not something that comes up a lot in the world that I live in, in Washington, in the policy world; people don’t tend to think about that kind of risk. I think it’s a concern. It’s a hard problem because we don’t really know how the technology is evolving. And I think one of the things that’s challenging with AI is our frame for future, more advanced AI. Often the default frame is thinking about human-like intelligence. When people talk about future AI, they use terms like AGI, or high-level machine intelligence, or human-like intelligence. We don’t really know how the technology is evolving.

I think one of the things we’re seeing with AI and machine learning that’s quite interesting is that it often is evolving in ways that are very different from human intelligence, in fact quite alien and quite unusual. And I’m not the first person to say this, but I think it’s valid that we are on the verge of a Copernican revolution in how we think about intelligence: rather than thinking of human intelligence as the center of the universe, we’re realizing that humans are simply one type of intelligence among a whole vast array and space of possible forms of intelligence. And we’re creating different kinds; they may have very different intelligence profiles, they may just look very different, they may be much smarter than humans in some ways and dumber in other ways. I don’t know where things are going. I think it’s entirely possible that we move forward into a future where we see many more forms of advanced intelligent systems, and because they don’t have the same intelligence profile as human beings, we continue to kick the can down the road on calling them true intelligence, because they don’t look like us. They don’t think like us. They think differently. But these systems may yet be very powerful in very interesting ways.

We’ve already seen lots of AI systems, even very simple ones, exhibit a lot of creativity, a lot of interesting and surprising behavior. And as we begin to see the scope of their intelligence widen over time, I think there are going to be risks that come with that. They may not be the risks that we were expecting, but I think over time there are going to be significant risks, and in some ways our anthropocentric view is, I think, a real hindrance here. It may lead us to underestimate risk from things that don’t look quite like humans, and maybe miss some things that are very real. I’m not at all worried about some AI system one day becoming self-aware and having human-level sentience; that does not keep me up at night. I am deeply concerned about advanced forms of malware. We’re not there today yet, but you could envision things over time that are adapting and learning and begin to populate the web. There are people doing interesting work thinking about systems that have misaligned goals. It’s also possible to envision systems that don’t have any human-directed goals at all. Viruses don’t. They replicate. They’re effective at replicating, but they don’t necessarily have a goal in the way that we think of it, other than self-replication.

If you have systems that are capable of replicating, of accumulating resources, of adapting over time, you might have all of the right boxes checked to begin to have systems that could be problematic. They could accumulate resources and cause problems, even if they’re not trying to pursue a goal that’s misaligned with human interests, or even any goal that we might recognize. They simply could get out into the wild, and if they’re effective at replication and acquiring resources and adapting, then they might survive. I think we’re likely to be surprised, and continue to be surprised, by how AI systems evolve, and where that might take us. And it might surprise us in ways that are humbling for how we think about human intelligence. So one question, I guess, is: is human intelligence a convergence point for more intelligent systems? As AI systems become more advanced, do they become more human-like, or less human-like and more alien?

Lucas Perry: Unless we train them very specifically on human preference hierarchies and structures.

Paul Scharre: Right. Exactly. Right. And so I’m not actually worried about a system that has the intelligence profile of humans, when you think about capacity in different tasks.

Lucas Perry: I see what you mean. You’re not worried about an anthropomorphic AI, you’re worried about a very powerful, intelligent, capable AI, that is alien and that we don’t understand.

Paul Scharre: Right. It might have cross-domain functionality, it might have the ability to do continuous learning. It might be adaptive in some interesting ways. I mean, one of the interesting things we’ve seen about the field of AI is that people are able to tackle a whole variety of problems with some very simple methods and algorithms. And this seems for some reason offensive to some people in the AI community, I don’t know why, but people have been able to use some relatively simple methods, with just huge amounts of data and compute, to tackle a variety of different kinds of problems, some of which seem very complex.

Now, they’re simple compared to the real world. When you look at strategy games like StarCraft and Dota 2, the real world looks way more complex, but these are still really complicated kinds of problems, and systems are basically able to learn them totally on their own. That’s not general intelligence, but it starts to point towards the capacity to have systems that are capable of learning a whole variety of different tasks. They can’t do this today continuously without suffering the problem of catastrophic forgetting, though people are working on these things as well. The problem today is that the systems aren’t very robust; they don’t handle perturbations in the environment very well. People are working on these things. I think it’s really hard to see how this evolves. But yes, in general, I think that our fixation on human intelligence as the pinnacle of intelligence, or even as the goal of what we’re trying to build, this sort of anthropocentric view, is probably one that’s likely to lead us to underestimate some kinds of risks.

Lucas Perry: I think those are excellent points, and I hope that mindfulness about that is able to proliferate in government and among actors who have the power to help mitigate some of these future and short term AI risks. I really appreciate your perspective, and I think you bring a wholesomeness and a deep, authentic consideration of all the different positions and arguments here on the question of autonomous weapons, and I find that valuable. So thank you so much for your time and for helping to share information about autonomous weapons with us.

Paul Scharre: Thank you and thanks everyone for listening. Take care.
