Army of None: Autonomous Weapons and the Future of War, with Paul Scharre

May 8, 2018

"What happens when a predator drone has as much as autonomy as a self-driving car, moving to something that is able to do all of the combat functions all by itself, that it can go out, find the enemy, and attack the enemy without asking for permission?" asks military and technology expert Paul Scharre. The technology's not there yet, but it will be very soon, raising a host of ethical, legal, military, and security challenges.

To see the slides and images that Paul Scharre references, please watch the full event video of this talk.

JOANNE MYERS: Good evening, everyone. I'm Joanne Myers, and on behalf of the Carnegie Council I would like to thank you for joining us for this Public Affairs program. Our speaker is Paul Scharre, and he will be discussing Army of None: Autonomous Weapons and the Future of War.

There is no question that his experiences as a former U.S. Army Ranger with tours in Iraq and Afghanistan have not only informed his writings but also influenced his ideas in creating the U.S. military guidelines on autonomous weapons. He is now heading a program at the Center for a New American Security (CNAS) that is focused on technology and security. We are delighted to host him this evening so that he can share his thinking on this topic with us.

What will future wars look like? As technology and artificial intelligence (AI) advance quickly, moving from the realm of science fiction to designer drawing boards to engineering laboratories and to the battlefield, the possibility that machines could independently select and engage targets without human intervention is fast approaching. These new machines or autonomous weapons that are often referred to as "killer robots" could operate on land, in the air, or at sea, and are threatening to revolutionize armed conflict in alarming ways.

This raises the question of whether future wars will be fought with enemy combatants who wear no uniform, defend no territory, protect no population, and feel no pity, no remorse, and no fear. These new weapons have prompted a debate among military planners, roboticists, human rights activists, legal scholars, and ethicists, all of whom are wrestling with the fundamental question, which basically boils down to whether machines should be allowed to make life-and-death decisions outside of human control.

In Army of None our speaker provides a cutting-edge vision of autonomous weapons and their potential role in future warfare. Drawing on his own experiences and interviews with engineers, scientists, military officers, Department of Defense (DoD) officials, lawyers, and human rights advocates, Paul skillfully takes us on a journey through the rapidly evolving world of next-generation robotic weapons while raising the many ethical, legal, military, and security challenges surrounding these new machines.

The stakes are high. Artificial intelligence is emerging as a powerful technology. It is coming, and it will be used in war. The real question is: What will we do with this technology? Do we use it to make warfare more humane and precise? Can we do so without losing our humanity in the process? In the end, do we control our creations, or do they control us?

For a glimpse into this new world and the challenges that advanced artificial intelligence systems will bring, please join me in welcoming our guest tonight, Paul Scharre. Thank you for joining us.

PAUL SCHARRE: Thank you so much, Joanne, for that introduction.

I'm Paul Scharre. I work at a think tank in Washington, D.C., called the Center for a New American Security. It's a national security-focused think tank, and I run our program on technology and national security.

The program I run is focused on how different technologies are going to change U.S. security and what the United States needs to do to take advantage of those technologies and prepare for what others might do. We work on a range of things from Twitter bots and Russian disinformation to emerging weapons technologies or human-enhancement technologies. What I'll talk about today is one aspect of this that I've been working on for about 10 years now, when I was at the Pentagon, working in the office of the secretary of defense as a policy analyst there and since then working at the Center for a New American Security.

As Joanne mentioned, I have a book out, Army of None. I want to walk you through some of the highlights of this technology, how it's evolving, what people are doing around the globe, and then some of the legal and ethical or other issues that come up. Given this venue, I will try to emphasize in particular some of these ethical concerns that people raise as this new technology is developing.

We are in a world today where robotic systems or drones are widely used by militaries around the globe—there are at least 90 countries that have drones today—and many non-state actors. The Islamic State of Iraq and Syria (ISIS) and others already have access to drones. Sixteen countries and counting have armed drones, and you can see on the map where the majority of them are coming from. They are actually coming from China as they spread around the globe, and again, non-state groups have these as well. The Islamic State has built crude homemade armed drones that they are using in Iraq and Syria for attacks. They are using them as improvised explosive devices coming from the air to attack U.S. troops and others.

With each generation, these robotic systems are becoming increasingly autonomous, increasingly sophisticated. In many ways, they are like automobiles. We are seeing with each generation of cars more features being added to them, like self-parking, automated cruise control, and automatic lane keeping, features that begin to do more automation all on their own inside the vehicle. We're seeing the same thing in robotic systems. With each generation these become increasingly advanced and increasingly autonomous, doing more tasks all on their own.

The question that the book grapples with is: What happens when a Predator drone has as much autonomy as a self-driving car, moving to something that is able to do all of the combat functions all by itself, that it can go out, find the enemy, and attack the enemy without asking for permission?

We're not there yet. Right now people are still in control, but the technology is going to take us there. It is being developed not just in military labs but in commercial labs all around the world, and this technology is very dual-use. The same technology that will allow self-driving cars and will save lives on the road may also be used in warfare.

We are seeing this develop not just in the air but in other domains as well. This is a picture where you can see a completely uninhabited boat. There is no one on board; it is driven autonomously. This one was developed by the United States Office of Naval Research and was used in experiments with swarming boats on the James River in Virginia, where multiple boats would coordinate their behavior, working as a team, all autonomously on their own. When the Navy talked about this they pointedly said, "These boats in the future will be armed." People, they said, will still be in control of the weapons for now, but it's not clear where this is going in the long term.

This is the Israeli Guardium. It is an uninhabited, fully robotic ground vehicle that has been put on patrol on the Gaza border by Israel. It is again armed. You don't see the machine gun here in the picture, but they have one on it. Again, the Israelis have said that the vehicle itself may be autonomous, but a human is going to be in charge of the weapon and controlling whether it is going to fire.

Not all countries might see this the same way. This, as you can see, is a much larger system; it looks like a miniature tank. This is a Russian ground combat vehicle called the Uran-9. It's equipped with a heavy machine gun and rockets. The rockets are actually on an extendable arm, so it can hide behind a hillside and reach up and fire on NATO tanks.

The Russians have said that they envision in the future fully roboticized units will be created that are capable of independent operations. What exactly that means is not entirely clear, but it is clearly a much more aggressive vision for robots in future warfare.

We're also seeing a number of leading military countries develop stealth drones that would be intended to operate in contested areas. Drones today are largely used to track insurgents or terrorists in places where the insurgents can't shoot the drones down.

Drones like this one, however, are different. This is the X-47B, an experimental drone the U.S. Navy built several years ago. It has now been retired, but it was designed to push the boundaries forward on autonomy. Here it is landing on an aircraft carrier fully autonomously. It was the first vehicle to do so. It could take off and land on aircraft carriers on its own. It was also the first aircraft to do autonomous aerial refueling, which is an important combat function to extend its range into contested areas. You can see its sleek design. It is intended to have the profile of a stealth aircraft.

This is not an operational aircraft; it doesn't carry weapons on board, and it has actually been retired now. But this and other systems being built by Russia, China, Israel, the United Kingdom, and France are all designed to be a bridge toward future combat aircraft that would operate in very contested areas.

The challenge to this idea is that in these contested environments adversaries are going to be much more equal, and they might have the ability not just to target things with radars and shoot them down but also to jam their communications links, which is really the fragile part of these systems. Right now they depend on humans. The humans aren't on board; they're flying the aircraft remotely from somewhere else. If you can jam the communications links, you cut the human out of the system. We're already seeing this today. The United States has acknowledged that Russia has been jamming the communications links for U.S. drones in Syria. For today's systems, which don't have a lot of autonomy, this might inhibit their operation.

The challenge going forward with future combat aircraft is: if you have an aircraft forward in this environment and the enemy has jammed its communications, what do you want it to do? Does it come home? Does it take pictures? Does it do surveillance operations but not drop any weapons? Would it be allowed to, say, attack pre-planned targets?

That's kind of what a cruise missile does today: A human programs in the coordinates of the targets, and the missile goes and does that. What if it came across what the military calls "targets of opportunity," something that would be new?

A lot of really valuable military targets are now mobile, things like mobile missile launchers in North Korea. You can't know in advance where they're going to be—that's why they're mobile—so North Korea can hide them. That's a high-value target. If there were a war, we would be very concerned about finding those missiles and making sure that North Korea doesn't launch them. Would you permit some robotic vehicle like this to make those decisions on its own? That is the kind of concern that militaries are going to have to grapple with as this technology matures. They are going to have to write something in the code for what the vehicle can do when its communications are lost.

Another issue is self-defense. This is called the Sea Hunter. It is a U.S. Navy vessel intended to operate as a robotic ship. You can see in the picture that there are people standing on it, but it can operate fully autonomously with no one on board. It's designed to track enemy submarines, to follow them like a persistent tail wherever they go.

One of the challenges here is if you had this out on the water and the enemy comes up and they try to board it, what are you going to do? There's no one on board to fight them off. Would you let them just take it?

Not too long ago, China actually did this. They snatched a U.S. underwater drone in the South China Sea. They just took it. It was just sitting there, and they took it. Afterward, the United States protested and said, "Give us our drone back." The Chinese said: "Oh, we're sorry. We didn't mean to. That was some low-level person doing that. That was not authorized," and they gave it back.

How are you going to treat this? Are you going to let somebody—it's a $20 million ship—take that, or are you going to give it the ability to defend itself on its own? Again, if you have communications with it, a human could make those calls. But if you don't, if they jam its communications—again, these are very real problems militaries will have to come up with some answer for.

We're also seeing increased autonomy not just in vehicles but also in advanced missiles. This is an image of the U.S. Long Range Anti-Ship Missile (LRASM). It's not a photograph; it's a diagram. One of the things you see it doing here is autonomous routing.

The target is chosen by a person. A person using, for example, maybe satellite imagery will say, "There's an enemy ship here, I want to attack this enemy ship," and launches the missile. But then on its way if the missile runs into a pop-up threat—that's what the red bubble is meant to convey: "Don't go into this area. There is enemy there"—the missile on its own can decide to route around that. That is one of the ways that we're seeing more autonomy creep into these systems over time.
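
To give a rough sense of what "autonomous routing" means in software, here is a minimal sketch in Python. It is not the actual LRASM guidance logic, which is not public; it is a toy path planner in which a human supplies the target and the code replans a route around a circular keep-out zone on its own, using a plain breadth-first search over a coarse grid.

from collections import deque

def plan_route(start, goal, threat_center, threat_radius, grid_size=20):
    """Return a list of grid cells from start to goal that avoids the threat circle."""
    def blocked(cell):
        dx, dy = cell[0] - threat_center[0], cell[1] - threat_center[1]
        return dx * dx + dy * dy <= threat_radius ** 2

    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            break
        x, y = current
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < grid_size and 0 <= nxt[1] < grid_size
                    and nxt not in came_from and not blocked(nxt)):
                came_from[nxt] = current
                frontier.append(nxt)

    if goal not in came_from:
        return None  # no route exists around the keep-out zone
    path, cell = [], goal
    while cell is not None:
        path.append(cell)
        cell = came_from[cell]
    return path[::-1]

# The human designates the target at (19, 10); a threat zone then pops up mid-route,
# and the planner routes around it without further human input.
route = plan_route(start=(0, 10), goal=(19, 10), threat_center=(10, 10), threat_radius=4)
print(len(route), "waypoints, ending at", route[-1])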

This raises the question: What about some future missile that might set its own targets? Or the human might say, "Well, just go out there and look for the enemy and see what you can find." Again, we're not there yet, but the technology will make it possible.

We've already had for decades, in militaries around the globe, missiles and torpedoes that have a lot of autonomy of their own. This is a high-speed anti-radiation missile, a missile that is used to target enemy radars. Here it is on a Navy F-18. The way it works is the pilot knows that there is an enemy radar signature and launches the missile; the missile has a sensor on board that can sense the radar all on its own, and it can maneuver toward the radar to make sure that it hits and doesn't miss. In that case, it is much smarter, much more intelligent than, say, a dumb, unguided bomb.

Weapons of this type have been around for 70 years. The first was a German torpedo in World War II that could listen to the sound of Allied ships' propellers and then home in on them. These are widely used. Many of them are what the military calls "fire-and-forget" weapons. Once one is let go, it's not coming back. The human in that case has no control over the weapon. But the autonomy, or the freedom, of the weapon is bounded very tightly, so it's not given a lot of freedom to, say, roam over a wide area. You might think of it like a police attack dog sent to chase down a suspect who is running. It's not the same as, for example, a stray dog roaming the streets deciding on its own whom to attack.

That might change going forward. In these pictures here you see in the blue what I will call a semi-autonomous weapon, one where the human decides the target. The human says, "Well, there are some tanks here" or something else, and lets this weapon go to attack that. The freedom—which you can imagine is represented by the circle there—the weapon has is very tightly bounded in space and time and what it's allowed to do.

The question is what happens when we get to something in the red: the fully autonomous weapon that can search over a wide area. One example of this in use today is an Israeli drone called the Harpy. Unlike the HARM, the anti-radiation missile that is designed to target enemy radars, the Harpy also goes after radars, but it can search over hundreds of kilometers and can loiter over a big area for up to two-and-a-half hours.

So now the human doesn't need to know where the radar is. You could just say, "Well, I think it's likely that there will be enemy radars in this area," and launch these Harpies, and it will find them all on its own. This fundamentally changes the human's relationship with what's happening in war. The human doesn't need to know the particulars of that situation. The human could say, "Well, I'm just going to let the robot figure that out."

There are some examples of things like this in operation. This is a Phalanx gun from a U.S. Navy ship. There is a version of this that has also been put on land, called the Army's Counter Rocket, Artillery, and Mortar System (C-RAM). It is somewhat affectionately referred to by some of the troops as R2-D2 because of its dome shape. It is used to defend ships and land bases against incoming missiles.

The counter to all of these smart, homing missiles is more autonomy on the other side. Most of the time humans are in the loop with these systems; humans are making the decision about what to do. But because the speed of attacks could overwhelm humans—you could have so many missiles coming in that humans can't possibly cope—many of these systems have "full-auto" modes. There is a mode you can switch it to. You don't steam around with a ship on this in peacetime, but in wartime you could turn it to this mode, and anything coming in that is a threat, this gun will shoot all on its own if it meets certain characteristics: certain altitudes, certain speeds, a certain radar signature.
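
As a rough illustration of what a "full-auto" mode reduces to in software, here is a toy Python sketch. This is not the actual Phalanx or C-RAM logic, which is not public, and the thresholds are invented; the point is that the engagement decision becomes a fixed rule set over track characteristics like altitude, speed, and radar signature.

from dataclasses import dataclass

@dataclass
class Track:
    altitude_m: float           # measured altitude of the incoming object
    speed_mps: float            # closing speed, meters per second
    radar_cross_section: float  # apparent radar signature
    inbound: bool               # whether the track is closing on the defended asset

# Hypothetical envelopes; the real criteria are classified settings.
MAX_ALTITUDE_M = 3000.0
MIN_SPEED_MPS = 200.0
MAX_RCS = 1.0   # small signature, consistent with a missile rather than an aircraft

def engage(track: Track, full_auto: bool) -> bool:
    """Return True if the gun would fire on this track without a human order."""
    if not full_auto:
        return False  # normally a human approves each engagement
    return (track.inbound
            and track.altitude_m < MAX_ALTITUDE_M
            and track.speed_mps > MIN_SPEED_MPS
            and track.radar_cross_section < MAX_RCS)

# A fast, low, small, inbound object meets all of the criteria.
print(engage(Track(altitude_m=150, speed_mps=600, radar_cross_section=0.1, inbound=True),
             full_auto=True))  # True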

There are other examples. This is the Army's Patriot air and missile defense system, deployed here in Turkey along the Syrian border—this again has an automatic mode—and the Navy's Aegis Combat System on a ship, seen here launching a missile. These types of systems are in use by at least 30 countries around the globe today, so they are in widespread use, and they've been around for decades.

This is an image of the Israeli Harpy. The Harpy has been transferred to a handful of countries—Turkey, China, India, and South Korea—and China has reportedly reverse-engineered their own version of this.

We have not yet seen wider proliferation of these weapons, but we have seen some historical examples. In the 1980s the U.S. Navy had a missile called the Tomahawk Anti-Ship Missile (TASM) which basically did this: It launched over the horizon to go after Soviet ships. You can see it here in this diagram. It was intended to go out and fly a search pattern to look for Soviet ships.

This was taken out of service in the 1990s. One of the questions that I explore in the book is: if weapons of this type have been around for decades, why aren't they in more widespread use by militaries, which they really are not? I talked to people in the Navy who are familiar with this. One of the things they said was that for this weapon system at that time, one of the challenges was: if you're launching it over the horizon and it can hunt for ships all on its own, but you don't know where the ships are, why are you launching it? It's a missile; it's not coming back. It's expensive, and the ship only carries a limited number of them.

There was this problem of, when am I using this? What's the situation? We haven't seen a lot of these, and there were some experimental programs in the 1990s that were canceled.

But drones begin to change this dynamic because drones are now recoverable, and you can launch the drone to go over the horizon and look for the ships, and if it doesn't find any, that's okay. You still get it back. So something we may see going forward is more of these systems in use as we see more advanced drones being used by militaries around the globe.

We're seeing in a variety of places more autonomy. These are some photos from an experiment I saw out at the Naval Postgraduate School in Monterey, California. Here they're building these small Styrofoam drones.

The physical aspect of the drone is not very complicated. It is a very simple kind of thing; frankly, there is so much robotics being done in schools now that high school students, maybe even junior high school students, could build it on their own. What's really powerful about these types of systems is the brains of them, the autonomy. What they're doing is building swarms of drones that can operate collectively.

Here they're working on 10 drones operating as an entire swarm, and they're building experiments—one of the ones I witnessed—of swarms fighting other swarms. In the air, 10 versus 10, these swarms are fighting each other, and they're trying to figure out the tactics of this kind of warfare.

We're also seeing this in cybersystems. This is an image of one variant of Stuxnet as it spread across the Internet and across various computer networks in Iran and elsewhere. This is again a very autonomous cybersystem. Stuxnet could go out on its own and spread across networks and then when it found its target deploy its payload entirely by itself.

We're seeing more advanced autonomy in cybersystems. This is an image of Mayhem. It was the winning computer from the Defense Advanced Research Projects Agency's (DARPA's) Cyber Grand Challenge a couple of years ago.

What they did is they put different teams together to do an automated, what they call a "capture the flag" competition for cyber vulnerabilities. They had machines like this one scanning software for vulnerabilities, for ways to hack in and exploit them, and if they found one on their own computer networks, the machine could all by itself patch that vulnerability, fix it up. But if it found one in another network, it would exploit it. It would attack, and they were competing against each other. This was the winner of the competition.

These systems are good. They are not better than the top human hackers in the world, but they're in the top 20. So they're good enough to be used in a variety of settings. In fact, the Defense Department is using this company's technology to deploy within its own computer networks to help secure them. So we're seeing more autonomy not just in physical systems but in cybersystems as well.

A really important issue that this raises: What about the laws of war? There has been a growing movement of non-governmental organizations arguing that these weapons would be illegal under the laws of war. What is interesting is that the laws of war don't actually specify that a human has to make any targeting decisions. What the laws of war talk about are certain principles of effects on a battlefield. You can't deliberately target civilians. The laws of war acknowledge that some civilian deaths might happen in war—collateral damage—but that can't be disproportionate to the military necessity of what a military is trying to do.

These are in some settings very high bars for these kinds of weapons. If you wanted to deploy a weapon like this in, say, a congested civilian environment in a city, it would be very challenging. But there are other settings where you could use these weapons today that would be lawful, for example, if you were sending an autonomous vehicle undersea to target submarines. If it's a large metal object underwater, it's a military submarine. It's not a hospital, not a school, not some other protected object. It might be your own submarine, though; that's a problem, and a big problem for militaries using this technology. But the environment in this case makes the legal issues easier.

One of the open questions, and one of the most debated issues around this technology, is whether, as the technology advances, these machines will become smart enough to be used in ways that comply with the laws of war in other settings. Some people argue that is possible: just as self-driving cars are going to reduce deaths on roads and highways and will be better than people, these machines might be better than humans at war. Humans make mistakes, humans commit war crimes, people are not perfect.

Others have argued: "It's too hard. That's science fiction, and we can't take a chance, and we should ban these weapons today."

Another important question is the moral and ethical issues surrounding these weapons. This is in many ways much more interesting but also much harder to grapple with because the laws of war are written down. We can all agree on what the laws of war are. We might disagree on interpretations or how hard it would be for the technology to comply, but we know what they are. There are many different competing ethical theories, and there are some ethical arguments saying, "Maybe we should use these weapons because they could save civilian lives, and that would be good," and others saying, "Not only might they cause more harm, but there's something fundamentally wrong with a machine making these choices."

I want to tell a story about my time in Afghanistan that illustrates an important point about some of these ethical dimensions of warfare. This was relatively early on in the war in Afghanistan. I was part of a Ranger sniper team that was deployed on a mountaintop near the Pakistan border, looking for infiltrators, for Taliban coming across the border.

We infiltrated at night, and as the sun rose we found that our position was not as well hidden as we had hoped. As you can see, in the region we were in—this is Afghanistan as a whole, a DoD image—there is not a lot of cover in a lot of these places, not a lot of trees. We were pretty well exposed. It was clear that the nearby village knew we were there. We weren't hiding from anyone.

It wasn't long before a little girl came out to scout our position. She had a couple of goats in tow, but it was pretty clear that she was out to check on us. She did this sort of long, low circle around us. She was maybe five or six. She wasn't very sneaky. She was looking at us, and we were looking at her. Later we heard the chirping of a radio she was carrying and realized she was reporting back information about us. We watched her walk around us and look at us.

She left. Not long after, some Taliban fighters came along. We took care of them. The gunfight that followed got the whole village out, and eventually we had to leave, but we talked later about what we would do in a similar situation.

We said that if we saw a civilian on the mountaintop, we might try to apprehend them and pat them down to see if they had a radio, so we would know whether we were compromised right then or whether we had a couple of hours before they could get back to the village and report to someone, and we could decide what to do.

Something that never came up was the idea of shooting this little girl. It was not a topic of conversation.

What's interesting is, it would have been legal under the laws of war. The laws of war do not set an age for combatants. Your combatant status is determined by your actions, and she was scouting for the enemy. She was participating in hostilities. That makes her an active combatant.

I still think it would have been wrong to shoot this little girl, morally, ethically. Here she is. She doesn't know any better. She has been pressed into this. She doesn't want to be part of this war. But if you programmed a machine to comply with the laws of war, it would have killed this little girl.

So could we build a machine that knows the difference between what's legal and what's right, and how would we do it? This I think was a relatively simple situation, but in other settings there are very difficult moral and ethical choices that people face in warfare.

Then the more difficult question is what these kinds of weapons might mean for stability among states. This is an image from a flash crash on Wall Street a few years ago. We actually have an example of what a domain might look like when you have a competitive dynamic, an arms race in speed and automation, and of what might result. We have already seen this in stock trading.

Stock trading is largely automated today. We have bots making trades in milliseconds, in a domain where humans can't possibly react in time, and we've seen instances like this where the bots interact in some way that is unexpected. Trading companies aren't going to share their algorithms with their competitors. We have also seen people exploit the behavior of these bots when they are predictable.

In stock trading what regulators have done is install circuit breakers to take the stocks offline if the price moves too quickly. But how do we deal with this in warfare when you have autonomous war machines interacting at machine speed and you get something like a flash war? There is no referee to call time-out in war. There is no regulator to say, "Let's all hit the pause button." So that's another concern.
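
The safeguard in stock trading is mechanical and simple. Purely for illustration, a market-wide circuit breaker boils down to something like the short Python sketch below; the threshold here is invented, since real breakers use specific tiered percentages. The point of the comparison is that war has no equivalent referee who can halt the interaction.

def circuit_breaker(prices, max_drop_pct=7.0):
    """Return the index at which trading halts, or None if the breaker never trips."""
    reference = prices[0]  # e.g., the prior session's closing price
    for i, price in enumerate(prices):
        drop_pct = (reference - price) / reference * 100.0
        if drop_pct >= max_drop_pct:
            return i  # halt trading so humans can step in
    return None

prices = [100.0, 99.2, 98.5, 96.0, 92.4, 90.1]   # a rapid slide
halt_at = circuit_breaker(prices)
print("Trading halted at tick", halt_at)  # halts at tick 4, a 7.6 percent drop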

There have been ongoing discussions for five years now at the United Nations among states and non-governmental organizations about this technology. I will tell you that progress is moving very slowly diplomatically. If you add up all of the time they have spent discussing this, it's about five weeks total, about one week a year, and there is a lot of throat-clearing and preamble internationally. Frankly, it took the first year just to convince everyone we weren't talking about drones. We get to the end of the first week of discussion, and two countries say, "This isn't about drones at all."

We're like: "No, that's right. This is about what comes next."

"What does that mean?"

"Well, we'll see you next year."

It's moving very slowly. Meanwhile, we've got people developing this technology at a breakneck pace, not just in military labs but in other commercial companies as well.

I want to talk very briefly about the views of some of the senior Pentagon leaders. Whether you agree with them or not, I think it's valuable for people to know what they've been saying on this.

This is the vice chairman of the Joint Chiefs, General Paul Selva. He is the number-two person in the U.S. military. In the foreground is then-Deputy Secretary of Defense Robert Work. He has since left the Pentagon and is now at the think tank I'm at, the Center for a New American Security. At that time he was a big champion of robotics and automation and artificial intelligence.

One of the things they've said is that for now humans will be in control of these weapons, humans will be making these decisions. But they have said: "Look, at some point in time down the line, we need to think about what our competitors might do, and if others take the humans out of the loop and that means that they're faster, we might have to think about how we have to respond."

I will close with a quote from General Selva, who basically says that he thinks we should be advocates for keeping people in control lest we lose control of these robots in warfare. I want to point out that he's a military professional; he understands the value of ensuring that the United States is the best on the field, but he still wants people in control. This is a very high-level concept, though. He is not talking about signing up to a treaty banning these weapons, as some have advocated, or even saying that we need people pushing the button every single time. It is more of a high-level concept about people still being in control of what happens in warfare.

So as we move forward, one of the challenges is how we find ways to use this technology that might enable us to save civilian lives, as we could see happening in other areas like cars, without losing our humanity and without losing human control over what happens in warfare.

Thanks very much.

Questions

QUESTION: First of all, I wonder if you'd tell us more about your experience in Afghanistan.

And to what extent does human intel play a role in the effectiveness of these robots?

PAUL SCHARRE: Absolutely. Human intel is a really key part of operations today to better understand enemy networks. If you look at, say, drone operations today, humans are embedded in all aspects of these operations. The machines aren't making any meaningful decisions, and having people on the ground who can give you that information is incredibly valuable. Within the intel world, people certainly value human intelligence quite a bit.

At the same time, over time, in part because of all the technological advantage that the United States has and all of the data we're creating, an increasing amount of intel has been shifted to the technical side, to technical means, to being able to sift through signals intelligence and data sources.

Think about all of the information that's out there. If you go back 40 years, if you wanted to learn about a person, you had to go through human means. You had to find somebody who knew them and ask questions about that person; someone had to get close to them. Now you can go on social media and get all sorts of information about people. We freely give up information about ourselves to technology companies: what we type into a Google search bar, what we communicate on Facebook or put in emails.

So there is an increasingly tremendous amount of data that is out there, and machines are able to help people process that data. It is something that machines are very valuable for. People are still making the decisions, but machines can help sift through all of this and sort it out and even learn from the data and then give that information to people.

QUESTION: Don Simmons. I enjoyed your talk very much.

You mentioned the Russians are already able to jam communications to drones, I assume aerial drones. How about the next step? Are there hacking possibilities that would enable an adversary to take over control of a drone? Has that happened?

PAUL SCHARRE: It depends on the drone you're talking about and who's building it and how sophisticated it is. There are a variety of different kinds of attacks you could do.

One thing is just to jam its communications. Advanced militaries like the United States have the ability to do protected communications. They can build jam-resistant communications. Those are more expensive and shorter range, but it is possible to do. It is just harder, and it is going to be more contested.

It is also in theory possible to take control of drones. People have done this with commercial drones that don't have encrypted communication links.

A couple of years ago a hacker, a guy in his garage, built a drone that would go out, seek out other drones, hack their communications, and take control of them, building up this sort of zombie drone swarm under his control. For militaries, those communication signals are encrypted, so it is going to be harder to hack control of them.

But nothing in today's day and age is ultimately unhackable; it is very hard to achieve that. One of the concerns is that there might be ways into these systems to take control of them or at least to ground them in some way. Even if you have a human on board, that is still a risk. If someone finds a vulnerability in a Joint Strike Fighter, which has millions of lines of code, it might not be able to take off. That's a big problem.

We've had computer glitches before. The first time that the F-22 flew across the International Date Line it had a sort of Y2K kind of glitch as the time reset when it went across the date line, and it zeroed out most of its computer systems. The airplane almost crashed. The only reason they made it back was because they were with a tanker that didn't have these computer systems, and they could stick with the tanker. But they lost most of their digital systems.

Those are big concerns for anything, but they are amplified in a big way with autonomy because there might not be a human there to take control.

QUESTION: Lynda Richards.

Autonomy for automobiles is progressing on a timeline that is very public: 2020 to be mass-produced and out on the roads. I can't believe that the defense side of this is moving less quickly. Is there a reality in which there will be time to make any of these moral choices?

PAUL SCHARRE: If you're concerned about the pace of things and that we need to have time to do this, the biggest thing in your favor is the slowness of the defense acquisition system. That is actually the thing that is going to buy you time.

When you look at what's happening in research labs, at places like DARPA or the Office of Naval Research, they are doing very cutting-edge systems. In the defense world, they talk about what they call the "valley of death." DARPA or somebody else builds these really sophisticated technologies, and then there is this giant chasm that they have to cross bureaucratically to actually get into the force in large numbers. Part of that is a lot of bureaucratic red tape. There are a lot of procedures in place that are very risk-averse and that make things move very slowly.

But also, some of it is cultural. Just like in other fields, people don't want to give up their jobs. For example, I showed earlier—back up a second—here's the X-47B drone. This is the backside of it on the aircraft carrier. You would think that this would be a bridge to some future stealth combat aircraft; that's what it was intended to do. That is not the case, because the naval aviation community—the pilots—doesn't want it. So even though this aircraft is only several years old, it has been shelved, taken out of service. It was intended to be a bridge to a stealth combat drone, and the Navy is not doing that.

They're actually building a tanker aircraft that is not intended for combat operations. It is just going to give gas. Because nobody wants to do that. Nobody signs up to be a tanker pilot. That's not a fun job. So that's a place where the people in uniform are very happy to have the robot do that job.

There is this weird dichotomy: the technology is moving forward at a really quick pace in terms of what is possible, but when it comes to implementing it, some of the people who are most resistant are actually people in uniform who don't want to give up control.

QUESTIONER [Lynda Richards]: But you are describing the U.S. government and not the rest of the world.

PAUL SCHARRE: That's right. You are certainly going to see different things in terrorist groups, which might see it very differently, and I think other countries might see it very differently as well. That's a good point.

QUESTION: W. P. Singh from NYU. Thank you for a fascinating presentation.

Two sets of questions, and both of them in some ways could relate to Afghanistan: There have been certain restrictions on traditional warfare, the nature of warfare, because of geographical constraints. You talked about being exposed in Afghanistan, not much cover.

The one thing with autonomous weapons systems is that geography is almost irrelevant. You can go where human beings cannot survive, deep oceans, mountains, deserts. You can loiter for much longer, etc. We are going to change the entire nature of warfare as we have known it, which until now has been constrained by human ability. What is your response to that?

The second is, you touched on how this was likely to be much more relevant in countries which are at a similar level, let's say highly urbanized, highly wired societies. How effective would this be, again in the case of Afghanistan, where it's not state of the art at all, so in both of those situations?

PAUL SCHARRE: Those are great questions.

This is really going to oversimplify things a bit, but one of the things I hear people talk about is "human domains," land warfare or cities where there are lots of humans around, and then "machine domains," places where people either can't go geographically or where it's very hard. An example might be undersea. It's very hard to fight undersea. You have to have a giant submarine. Robots begin to change that. With robots I can fight undersea, I can track enemy submarines. I have to put a vehicle down there, but it doesn't have to be this gigantic submarine that has a place for people to sleep and has water and food.

It is hard to keep people alive underwater. That is not a natural environment. With robots I don't need all of these life-support systems. That has potentially very profound effects on how you fight and what you can do with the technology, and militaries will readily seize those advantages.

There are other places like space. Most of what goes on in space is I guess robotic with satellites. Particularly, this is the case in cyberspace, which is like an inherently machine domain. It is native to machines, and the machines are interacting at speeds sometimes that humans can't possibly respond to. So you're going to see much more of that I think in those kinds of spaces.

Sometimes when I hear people talk about this, people will be like: "Oh, and that's fine. We're not worried about the machines in those places." I think it depends on what you care about. If what you care about is civilian casualties, then yes, space and undersea is fine. If what you care about are some of these other harms like instability between nations or unintended effects, some of these domains you care quite a bit about.

Like the Internet. Everything is on the Internet. If you lose the Internet, how are cities going to get food? That is a huge problem. Gas pumps won't work. There are a lot of things that we rely upon now. Even though 30 years ago we didn't need all this technology and we didn't use it, we've stripped away those old redundancies, and we don't rely on them anymore. We don't have the same resilience that you might imagine you want to have, actually, if you thought that these systems were very fragile and vulnerable.

On your second question, about highly wired societies, I will tell you a story. A good friend of mine who used to work at CNAS and now works in the Pentagon was running, several years ago, a war game looking at these different new technologies and robots. There were a bunch of U.S. military people and NATO people in the audience, as well as some military people from other partner countries that weren't as technologically advanced.

In this case, there was a military officer from a Middle Eastern country, a smaller kind of country, and they were talking about this future scenario and they had all this new technology, and he raised his hand and said: "That's great. You guys have all this stuff. Let me tell you what I would do. I would train my forces to fight in the dark. I would have nothing, and I would jam all of your communications and all of your electronics, and I would make you try to fight in the dark with me, and you guys wouldn't be ready for that."

So this asymmetry I think is really important, both in how people might counter these new technologies and because in some of these settings the technologies may not be that decisive. When we fought these wars in Iraq and Afghanistan—we still have people in Afghanistan—you can have all this technology, with stealth fighter jets and everything else, and it doesn't necessarily translate to a decisive advantage on the ground, and particularly in those kinds of conflicts where we're trying to win over people, it doesn't really help.

JOANNE MYERS: I just want to ask a follow-up question. With the references you make to cyberspace, do you think we should be more concerned about using autonomous weapons in cyberspace than on the ground?

PAUL SCHARRE: I actually am more concerned, yes. Thank you for that.

Sometimes you hear military leaders talk about this, and they're like, "Oh, but in cyberspace we'll just automate it, and that's fine." I am more concerned. I think we're likely to see the technology move there faster because cyberspace is native to machines in a way the physical world is not. Actually building a ground vehicle that can maneuver on the ground autonomously is really hard.

What people are able to do for self-driving cars is they map the environment down to the centimeter, they know the height of the curbs and where everything is, they know where the stoplights are. The machines aren't just figuring it out on the fly. There is a lot of data that they use to feed that.

In military environments they're not going to have that data. They may not have Global Positioning System (GPS) to know the position of the systems. So it's even harder. Even simple things like recognizing a pothole is very challenging, actually, for computer vision.

But I think we're likely to see it faster in cyberspace. I think the risks are greater. There are greater pressures for automation and speed. So yes, I do think it is a concern. It's something I dig into in the book.

I will also say that at the physical level, these ideas of a flash war are probably not that plausible. You might have interactions. You can imagine interactions among robots. You can imagine a future where instead of a Chinese naval officer grabbing the underwater drone the United States had, it might be a Chinese robot, and maybe they start interacting in some way because we programmed our robots: "Hey, if they try to grab you, shoot." Then maybe you get some kind of weird effect where they shoot each other. But because of the constraints of physics in the physical domain, it is likely to happen at least at speeds that humans could begin to react to.

Are there risks? Yes. But it is entirely different in cyberspace where you could have all these interactions happening in milliseconds, far faster than people can respond.

QUESTION: Jared Silberman, Columbia Bioethics and Georgetown Law. Great to see you. Wonderful talk.

In one of the slides you showed the LRASM. You described it, I think, as a missile, but it looked awfully close to being able to come back like a drone. I want to bridge that with you. Would that affect treaty law and so forth, and how?

The second part of that is: What about getting drones to recognize the signs of surrender?

PAUL SCHARRE: There is a lot of stuff there. I remember from my time in the Pentagon that you asked hard, mind-bending questions then, and you're doing it again now.

LRASM and then surrender. As the technology is advancing, we are seeing a bit of a blurring of the lines between what we think of as missiles and drones. The Harpy that I showed earlier is referred to as a "drone," but it's one-way. It's not coming back, at least, you don't want it coming back at you. It is more like a loitering kind of missile. The LRASM has these little fins on it. The fins really help it cruise for long periods of time. You think of it like a cruise missile that can be launched from either ships or from aircraft. It is not intended to be recoverable, you really don't want this thing coming back at you. But you are seeing things that start to blend this line, and sometimes it does raise challenging issues around treaties and regulations.

One of the interesting effects with these evolving technologies is that people write down a treaty based on the technology of the time—and this is what makes regulating them so challenging—and then the technology evolves in a way that people didn't predict or foresee.

One of the challenges today is that there is a nonproliferation regime called the Missile Technology Control Regime that is intended to control the spread of ballistic and cruise missiles. It classifies drones as "missiles." It did so in the 1980s, in an era when drones were not what we think of today as aircraft. They were basically missiles, very simple, one-way target drones. You'd point them, and they would just keep going.

Now, when we move to today, we have drones that look much more like real aircraft. We even have things that people call "optionally manned," where the human could hop in it and fly it or the human could hop out, and it could fly by itself.

The technology is blurring this line, but the United States and people who signed this are stuck with this legacy of the regulations at the time. So it is a real big challenge for regulating how these technologies are evolving.

Surrender. There is an important difference—I didn't get into too much—between autonomous weapons that might target physical objects and those that would target people. Legally both are valid. If you're an enemy combatant, it is okay to target that person.

As a practical matter, it's much harder to distinguish between, say, a civilian and a combatant. In many wars people aren't wearing uniforms anymore. Whether they are combatants is going to be based on their behavior. This is hard for people. You might be in a situation where someone is approaching you in an alleyway, and you don't know whether it's some civilian who is coming up to ask you some question or it's somebody with a suicide vest on underneath.

Or you might see somebody who's got a weapon—I would see in the mountains in Afghanistan men who were armed all the time, and they weren't actually combatants. They weren't Taliban, they were people who carried a gun to protect their property against bandits and raiders and other things. There is no law enforcement out there. That was their means of self-protection. That doesn't mean that they're an enemy. You can't shoot them.

If we think about targeting people, all these legal things become much more challenging. One of them is what the laws of war refer to as "hors de combat." It's a French term for "out of combat." What it says is that an enemy combatant is no longer an enemy combatant if they have either surrendered or if they have been incapacitated and are out of the fight. That doesn't mean they're just wounded—if they're wounded and they're still fighting, then they're still in the fight—but if they're wounded and they're out, let's say they're knocked unconscious or they're out of it, then you can't target them anymore. You can't go around killing the wounded.

These are really important rules that put a restraint on the killing that happens in warfare, but they're very hard for machines to comply with. How would you build a machine that could recognize this?

You might say: "Well, I'm going to tell the machine if someone raises their arms like this and waves a white piece of cloth, then they surrender." First of all, how does the enemy know that's what they have to do? How do you effectively communicate that to them? That may not be culturally the same signal that they have for surrender. It might be something completely different.

The other problem is, let's say you tell the machine to do this. Now, once I learn that, all I have to do is: "Okay, I've surrendered. Here I come. I'm still coming, I've surrendered," right?

Humans can see through these types of ruses. Humans can say, "They're not really surrendering; that's a fake surrender," and they can figure out how to respond. But machines don't have the broad intelligence that humans have, where they can see the broader context of what's happening. We can build machines that can beat humans at poker, interestingly, but they don't do it the way humans do, with a theory of mind, asking whether this person is bluffing. They are just really good at understanding the odds and making really precise bets. So this is a really difficult challenge.

I don't know if it is possible, but one could argue that this is intrinsically just too hard for machines. I think there is a case to be made—I argue this in the book; I'm not saying I'm taking this position, but it's one that is out there—that this is so hard that building machines that could target people should simply be off limits for that reason.

JOANNE MYERS: Do you think that we're innovating in the right direction? Are we developing the tools necessary? Who comes up with the ideas to develop these machines?

PAUL SCHARRE: Again, there are really brilliant people in the United States as a whole and in the defense enterprise. But I do think there are many obstacles to the United States innovating the way that I think we should be.

One of them is that a lot of this technology is happening outside of the traditional defense sector, in places like Amazon and Google and Facebook and others. Look at artificial intelligence: that's where all the action is. It's not actually in defense companies. It is a struggle bureaucratically to get these companies to want to work with the Defense Department. There are a lot of self-imposed barriers that the Pentagon creates, making it hard for companies to work with it.

But also there are some cultural issues. There was just a big flap a couple of weeks ago when it came out publicly that Google had been working with the Pentagon on a project called Project Maven, which uses artificial intelligence tools that Google has made to process drone video feeds. One of the things that we can do with artificial intelligence is build object classifiers, image classifiers, that can identify objects and that can actually beat humans at benchmark tests for this.

The idea is that you take these tools, you feed them into drone video feeds, and then they can sift through these thousands of hours of video feeds to identify things of interest to people. Instead of having a human stare at these drone feeds for hours on end, you could have this algorithm watch the drone feed, and if you're looking for, say, a certain type of vehicle, or a human comes out of this compound, it would alert the human to watch for that.
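
As a highly simplified sketch of that workflow, and not Project Maven's actual code, which is not public, the Python below runs a stand-in classifier over each video frame and only alerts a human analyst when a watched object appears. The classifier, the watch list, and the alert channel are all placeholders, and the human still makes every decision.

from typing import Callable, Iterable

OBJECTS_OF_INTEREST = {"truck", "person"}   # hypothetical watch list

def monitor_feed(frames: Iterable, classify_frame: Callable[[object], set],
                 alert: Callable[[int, set], None]) -> None:
    """Scan a drone video feed and cue a human when a watched object appears."""
    for index, frame in enumerate(frames):
        labels = classify_frame(frame)        # e.g., {"truck", "tree"}
        hits = labels & OBJECTS_OF_INTEREST
        if hits:
            alert(index, hits)                # flag the frame for the analyst; no action is taken

# Dummy stand-ins for the classifier and the alert channel.
fake_frames = ["frame0", "frame1", "frame2"]
fake_classifier = lambda frame: {"truck", "tree"} if frame == "frame2" else {"tree"}
monitor_feed(fake_frames, fake_classifier,
             alert=lambda i, hits: print(f"Frame {i}: flag {sorted(hits)} for analyst review"))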

It's a good use of the technology. There was a big blow-up inside Google after it came out that this had happened. Eventually over 3,000 Google employees signed an open letter to their CEO saying: "We don't support this. We don't want to do this." This is their own country, and the AI is not killing anyone. It is not being used for autonomous targeting. Humans are still in control, and it is being used to target terrorists who are trying to kill Americans. But you have such a cultural divide between the Pentagon and Silicon Valley that people say: "I just don't support this. I don't want to do this."

I think it's really interesting when you think about this historically. It's hard for me to imagine something like Ford Motor Company employees standing up during World War II and saying, "We don't want to support Ford working with the U.S. government." That is only one challenge.

Another challenge is this "valley of death" I talked about. There are cultural challenges in importing this technology. So I do think it is a struggle to ensure the United States stays ahead particularly in areas like artificial intelligence.

Others have made very clear that they intend to take the lead in this space. China last year put out a national strategy for artificial intelligence. They are a major player in this space.

Depending on whom I talk to on the tech side, they will say the Chinese are either right behind U.S. companies or roughly equal, and I have heard some very knowledgeable people at tech companies say that the Chinese are a little bit ahead. China has said that its goal is to be the global leader in artificial intelligence by 2030, and it has a plan to do that.

We don't have anything like that in the United States. We don't have a national plan. We don't have major R&D (research and development) investment in this space. We don't have a plan for human capital, which is what really matters here, things like science, technology, engineering, and math (STEM) education, or immigration policy. In fact, we have an immigration policy coming out of the White House that actively discourages people from coming to the United States. Why would we do that? We want the smartest people from around the world to come here and stay here. Come to the United States. That's one of our best advantages. So I do think it is a big challenge.

I think we have become accustomed for decades to a world where the United States is the world's leader in technology. That is not a birthright. We are in a technology race with others, and if we want to win that race, we've got to compete.

JOANNE MYERS: Thank you for leading us in a fascinating discussion. Paul's book is available. Thank you.
