Mysterious Machines: The Road Ahead for AI Ethics in International Security, with Arthur Holland Michel

Jun 8, 2020

How do we learn to trust AI systems that we don't understand? What are the implications of this new technology as many nations confront a combination of mass protests and the pandemic?

JOEL ROSENTHAL: Good afternoon, and welcome to the Carnegie Council lunchtime webinar series. Thanks for joining us.

Today's topic is "Mysterious Machines: The Road Ahead for AI Ethics in International Security," and our guest is our good friend, Carnegie Council Senior Fellow Arthur Holland Michel. Arthur, good to see you.

ARTHUR HOLLAND MICHEL: Hi, Joel.

JOEL ROSENTHAL: Arthur is joining us from his home in Barcelona. In addition to his role as a senior fellow at the Council, Arthur is now an associate researcher at the United Nations Institute for Disarmament Research, affectionately known as UNIDIR. At UNIDIR he is working on the uses of artificial intelligence (AI) in the design and implementation of security systems.

I have been lucky to know Arthur over many years, first as a student at Bard College, and then as the co-founder and director of the Center for the Study of the Drone at Bard. His work at the Center led to the publication of his recent book Eyes in the Sky: The Secret Rise of Gorgon Stare and How It Will Watch Us All, which documents the origins of modern aerial surveillance.

Today we want to talk about the growing capacities of artificial intelligence and how these new capabilities are affecting international—and I guess today we will talk about domestic security as well—public security generally. We also want to talk about parallel developments in AI ethics and how ethical issues are helping to shape our understanding of security in an increasingly tech-driven environment.

Before we turn it over to Arthur, I just want to say a word about our format. The first half of the program will be a dialogue between Arthur and me, but the back half will be interactive, so I'm going to encourage you now to use the chat function to post questions as we go, and when we get to the second half-hour Alex Woodson, our moderator, will read questions on behalf of the audience.

So, Arthur, I thought we would start off with a brief discussion of AI ethics. There has been a lot happening in the past few years, and we can have a general discussion about where we are now, what has been accomplished, and what needs to be done. It seems that today in particular, given these new technologies and their uses—particularly with the protests that we're seeing all across the United States and I understand even overseas now—there are some particular applications that speak to your previous work.

Just to kick off the conversation I wanted to start with a brief quote from you at a previous engagement that you had at Carnegie Council, when we were talking about the uses of surveillance technology in terms of protests. This is just a quick quote: "Surely there were times in history when protesters, who had every right to do what they were doing, were referred to by authorities as 'thugs,' and we should be very glad that those authorities did not in every case have access to technologies like this"—meaning surveillance technologies—"that would allow them to act on those beliefs. We have controls in place for this very reason."

I wonder if you could pick up on that and talk a little bit about ethical principles in terms of how they govern the use of new technologies in the surveillance space and also then moving into the AI space.

ARTHUR HOLLAND MICHEL: Thank you, Joel. It's wonderful to be here and to be chatting with you again. I'm so glad that I can be a part of the Carnegie Council to engage in precisely these types of discussions.

Over the last few days I have been spending a lot of time thinking about the present situation and how it relates to these topics. There is a connection on two planes. In the quote that you read out I'm referring to a particular type of technology that can watch a very wide area from the sky and track individuals. What I realized was that this technology could be tremendously effective for, say, observing a convening or protest or demonstration and then tracking individuals who participated back to their homes. For people who previously had the right to anonymity in exercising their First Amendment rights, that anonymity can be very quickly pulled away.

I particularly started thinking about that specific application of the technology because an individual who develops this technology, who I spent some time with while writing the book, told me over the course of conversations that he had actually flown his wide-area aerial surveillance plane over St. Louis, Missouri, during the Michael Brown protests a few years ago. He said it was just for a test, sure. But he also told me in separate conversations that he was very much against the Black Lives Matter movement and had even in some cases referred to the people participating in this movement as "thugs."

When I asked him if he would pass the information that he had collected from this surveillance system on to the authorities if he saw something untoward, he said, yes, absolutely. That didn't sit all that easily with me. The ease with which that right could be torn away by this new technological reality was something that I had not contemplated before.

That's why I say there are two planes here. The first plane is that these movements are happening in a technological reality that is unprecedented in terms of the volume of data that can be collected and the ways that that data can be processed. It can be compiled so that you don't just have one source of data but you have many sources of data and you can piece together very detailed portraits of those you are observing from this information that is widely available.

There are technologies that we don't necessarily know all about; there isn't much transparency as to whether they are applied and employed, but what we do know is that the ease with which those rights can be intruded upon is totally unprecedented. That's why I think it's relevant to look at these things in the context of what's happening.

The second plane of thought that comes to me around what's going on is that this present situation shows to us just how profoundly far behind in the striving for equality we actually are, that society remains profoundly unequal and unfair, and that those inequalities are systemic, and they cannot just be pasted over. When these technologies emerge in the context of a systemically unequal society, then all the more care is needed to ensure that abuses don't happen because these technologies—as I said a few minutes ago—so easily lend themselves to those kinds of abuses and because they raise real questions that are problematic precisely because of inequality.

I think a lot of the issues that we are going to be talking about over the course of this conversation, specifically with regard to AI, wouldn't necessarily be issues in a totally equal, egalitarian society, but we see today and in the last few days that we don't live in such a society, and that's precisely why we need to have this conversation. That's why this conversation is more urgent than ever, because if not, these technologies could perpetuate those inequalities, and that's the last thing we need.

JOEL ROSENTHAL: I'm going to follow up a little bit on that, and then I know you want to get to the broader area of AI ethics and the principles that are being established. This question goes to inequality, but it goes more to a question of power and who has it.

In this previous engagement at Carnegie you also mentioned that you were able to look at a Pentagon manual on how they think about surveillance—the principles by which they're thinking about it. Again, this is in the national security domain, but at the level of principle the surveillance idea is not just to have the ability to watch but to give adversaries the sense that we know even their intent, so that they are always looking over their shoulders. The idea is that those who have the technology have not only the capability itself but also the knowledge that others know they are being watched, and the power that gives—first to the government, but then also to commercial enterprises or others who have the technology. You mentioned this private citizen who has it.

I wonder if you could say a little bit more about that potential in terms of those who have the technology to use that power, whether it's government, commercial, or private.

ARTHUR HOLLAND MICHEL: That goes to an interesting thought. There is a lot of hype in the world of AI. In some cases that just leads to disappointment—the robot that we thought would be able to play soccer, it turns out, can't do that—but that hype has troubling implications in the security sphere. I speak to people, and they assume that there are artificially intelligent drones flying overhead at any given time and that these systems can do a lot of things that perhaps they cannot.

In a sense there is an AI hype element there that is not to be discounted though, because there is something very intentional about that purpose, to give the adversary a sense that you can see everything. A lot of these systems that I wrote about in the book—which are primarily military systems—have names like "Gorgon Stare" or "Constant Hawk."

And, yes, as you mentioned, that was a military handbook, but as we are seeing all too clearly there is a tendency for not only technologies but also techniques to creep back from the battlefield into the domestic space and to blur indeed the line between the domestic space and the battlefield.

An example of that is that the company that makes the Gorgon Stare surveillance system unveiled a few years ago a civilian version for law enforcement that had pretty much the same capabilities, and they called it "Vigilant Stare." So they're not necessarily trying to roll back the effect of giving this sense of panopticism, this sense of being able to see everything.

Again, if you combine that with the notion that we don't really know what these systems are able to do—maybe there is some hype, but these systems are very, very powerful even without the hype accounted for—then it does give you a sense that maybe they are watching, and maybe you shouldn't show up to a demonstration. That potentiality is very dangerously close to infringing upon enshrined rights.

Again, this technological reality that we exist within exacerbates that effect, I would say, because now one police officer sitting at a console has access to all of these tools. It's not the person down the street who may be informing on you. There is a long history of old-school techniques of creating panopticism, but the efficiency with which this technology can enable that sort of thing, either real or imagined, can have a profound psychological effect that I think people should be talking about.

JOEL ROSENTHAL: Arthur, I would be remiss if I didn't mention, when you say "real or imagined," I think we're in a real space right now when the president has used the language of "dominate the streets." This is coming from the president, and standing next to him was the secretary of defense, who talks about battle spaces and so on, and who is standing next to the chairman of the Joint Chiefs, who is standing next to the attorney general. We can talk about this at a level of high principle, but these things are happening now. Given your research and experience, you can see also the connection between the national security and the domestic public security and how these technologies straddle both.

I wasn't intending to go there, but is there something that we could share with the audience about that relationship? I guess that goes a little bit to the commercial enterprise as well, but how do you think about the development of these technologies? I think many of them started in the national security realm, but we're now seeing this movement into public security in the domestic area.

ARTHUR HOLLAND MICHEL: It's a very real pipeline, a pipeline that is perhaps exacerbated by the fact that the kinds of battlefields that are predominant and preeminent today—if you squint—are quite similar to the kinds of domestic environments where law enforcement operates. That was not necessarily the case during the Cold War, when the types of battles that countries were by and large preparing for involved tanks and maneuver and strategic-level motions, whereas now you think about things like counterinsurgency and this blurred line even in those theaters between policing and military action.

On the technological side there is the fact that there is less of a divide, specifically when you talk about software and surveillance, between what is possible and applicable in the military realm and what is possible and applicable in the civilian realm. Cameras operate the same whether they're operating abroad or domestically—you may want to analyze data in the same way domestically as you might want to abroad—and the code shares a lot in common. In fact, you may be able to take those same capabilities and apply them in your environment, which, again, is not something that was necessarily always the case. It is all the more the case when you are just talking about software or you're talking about the cloud.

In that context specifically and all the more so in light of the way sometimes domestic space is referred to in law enforcement, in terms that very closely mirror the way the battle space is being referred to, it is definitely grounds for pause. This is nothing new. We have been seeing this expanding pipeline from the battle space to the civilian space for the last 20 years or more, and we are starting to see the culmination of that, but that process will probably get faster as the difference between those two spaces and the technologies themselves diminishes.

JOEL ROSENTHAL: Thank you, Arthur. I want to now move into the general conversation we were going to have about AI ethics and where it is.

Maybe you could say a little bit now—moving into AI specifically—about the ability of artificial systems to actually think and make decisions, what principles we need to think about in creating those systems, and where we are now, because I know there are many initiatives beginning to look at that.

ARTHUR HOLLAND MICHEL: You're absolutely right. The debate on AI is nothing new. In fact, these debates have been going on for quite a long time in all sorts of different fora.

There has been tremendous progress in these debates. What we have observed in the last few years is that these debates have started to coalesce in a profusion of ethical principles for AI, and that goes across the board among governments, private institutions, local and city governments and municipalities, and military entities. It seems like almost every week someone is announcing their AI principles.

I'm not going to go through all of the principles that exist. I should also note that I specifically look at the international security realm. A lot of these principles are broadly declared for AI in all kinds of verticals—loan decisions, medical systems and diagnostics, you name it.

Of specific relevance I think to the international security realm a few principles have emerged. One is this universal principle that AI has dangerous potentials, and so it must be used for good. There must be a proactive decision to use the system well, so non-maleficence. There is a fantastic paper in Nature Machine Intelligence that did a meta-analysis of all the different—I think they found 84—AI ethics declarations, and a number of commonalities emerged, including this one quite near the top.

There is also this principle of fairness and inclusivity, this notion that AI systems shouldn't—and this goes to what we were talking about at the beginning of the discussion—be embedded with bias or perpetuate bias, and that has come front and center as well.

People are starting to realize that AI systems can be vulnerable to attack. Beyond that, AI systems are going to become critical infrastructure. They are going to be doing important things, so in that regard they have to be secure, they have to be robust. Everyone agrees that an airplane, because it is vulnerable and can be attacked and is critical, needs to be secure and robust.

There is also this sense that AI needs to be transparent and accountable, that those who operate the AI shouldn't keep their methods and their data and their architectures secret from the people that the AI affects. That's a particularly important one in the security realm because obviously there is a prerogative in the security space from the perspective of the operator to keep some of your techniques and architectures a secret.

Finally—and this one is very important—this notion that AIs are not humans, that the optimal outcome is the symbiosis, if you will, the co-working between a human and an artificial intelligence, that it's not just sending out these artificial intelligence systems to do whatever they may do without regard for potential consequences, and crucially that humans will always be responsible for what AI does. That has become almost enshrined as a principle in this very contentious debate in the international space around military AI systems, and in particular the notion of a lethal autonomous weapon system that can go out and identify targets on its own and engage those targets. There seems to be a mounting consensus that whatever the technology is, whatever it does, the buck always stops with the human.

Those are just a few. I think that's where we are at now in terms of these declarations that we have seen.

JOEL ROSENTHAL: The next step is, how do we operationalize these principles as you laid them out, these issues of accountability, of fairness, and of transparency? These are great. They give us a framework to think about creating a system that will be responsive to principles.

Maybe you could pick an example or two. You could say more about lethal autonomous weapons or some other AI system perhaps that is in some contention. How should we think about making these principles matter in the operating of these systems?

ARTHUR HOLLAND MICHEL: There has been a lot of focus on the idea of a fully autonomous lethal weapon system. Funnily enough, when the debate truly started about 10 years ago—about the ethics, morality, and prudence of using such a machine—we were looking way into the future at these Terminator-like devices. In fact, in a lot of the media coverage of this debate the image at the top would always be an image from the Terminator movies.

That has rolled back in the last few years as people have realized that: one, that is very far away as a technologically feasible prospect; and, two, that there are less ambitious forms of autonomy in the military space that could nevertheless be problematic.

I am particularly interested in this notion of what I call "lethality-enabling autonomous weapons." So it's not the autonomous robotic drone that can identify targets and shoot at those targets, but an assistant, if you will, that operates next to the military operator and points to particular places on the battlefield, say, and says this is where the enemy is and perhaps this is how you should take the enemy out, or, if you do decide to use this particular missile, this is the likely effect. In operationalizing the principles I talked about, there is a whole range of still-troubling issues that need to be addressed in that context.

JOEL ROSENTHAL: That's very helpful in terms of thinking about it as a kind of assistant or a tool. I'm imagining other automated systems. Even when you fly on an airplane most of it is automated, but the pilot is there. I would imagine it, as you mentioned too, in medicine and diagnostics. AI systems are tremendous, and perhaps they outperform humans, and yet we still want the doctor to be working with the system in some way.

ARTHUR HOLLAND MICHEL: That's a crucial point, this notion that you don't solve all the problems necessarily by just keeping the human "in the loop," to use the term that is often employed here.

In fact, as my research is pointing at increasingly, the problems only begin when you have this human-machine interaction. It's a very fraught and potentially complex relationship, and it will still need to be addressed in one way or another. Those principles aren't enough.

JOEL ROSENTHAL: Arthur, we have a ton of questions. They are rolling in. But I want to ask one more, just so that we can fully get to what AI is in its essence. I think that will help also to give us a fuller picture of the challenge of thinking about it from an ethical dimension.

When we set this up you mentioned the "black box dilemma," and I wonder if you could share a little bit of your thinking about that. I think that helps to complete the picture in terms of what AI is and why it's so challenging to put it into a moral or ethical frame, and then we'll go quickly to the questions.

ARTHUR HOLLAND MICHEL: The black box, which I should clarify because it's a little confusing, is essentially the opposite of a black box in an airplane. A black box in an airplane will tell you everything that happened in the airplane. When you talk about a "black box" AI system it's like there is a black box somewhere, but it's at the bottom of the sea, and you're never going to find it.

People have started talking about this notion of a black box AI system and more broadly about the notion of explainability in AI. Why is that important? As a feature, AI systems are unpredictable. You want them to be able to observe an unstructured environment, if you will, and know how to negotiate it in a way that's maybe more effective or faster than a human.

As such, we can't perfectly model how they are going to behave. They are predictably unpredictable in a way, and they especially tend to do weird things in edge cases. You give an AI system a scenario it has never encountered before, and it may not act the way a human would act in that scenario, which is "I don't know what to do here." The AI system might just barge ahead and do something that a human would think is irrational.

That's why you want AI systems to be understandable. You want them to be able to explain what they're doing so that the operator can know at any given time whether a system is behaving rationally or whether there is perhaps a problem with the data. If that system doesn't give that insight into its operations, it's what is sometimes referred to as a "black box" AI system.

AI has existed for a long time, but the AI that existed decades ago was very transparent. It was rules-based: If this happens, then I will do this.

The kinds of systems that are gaining favor today are much more complex. They're probabilistic. You give them a data set, you tell them what you want them to look for in that data set, and they will run it through a very complex probabilistic process that is very difficult even for a scientist who works in this stuff to explain.

Now the challenge is how to make systems that are either not that—not a black box—or systems that can explain themselves in some way. It's a huge challenge because what counts as explainable to me might not be the same thing as what is explainable to you.

It will also depend on the application. If you're talking, for example, about a law enforcement application, you probably want the system to be very explainable because the stakes there are very, very high. The person operating that system probably doesn't have a Ph.D. in electrical engineering, so finding those balances, especially given that the technology is moving so quickly, is a tremendous challenge. Actually, one research report I'm working on right now is about how to define these features. And it's just one example. The lesson to impart from all this is that explainability is one tiny corner of the challenge of operationalizing these principles, because it goes to so many of them.

You want to be able to explain the AI system to the person who is affected by it. If it's a convolutional neural network, chances are you're not going to be able to hold true to that principle. So the hope is that rules, norms, and standards will coalesce around that problem in the years ahead, but it's a tricky one certainly.
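
To make the contrast Michel draws concrete, here is a minimal sketch, in Python, of a transparent rules-based system next to a learned, probabilistic one. Everything in it—the rules_based_alert function, the synthetic data, the small neural network—is a hypothetical illustration, not a description of any system discussed here.

```python
# A hypothetical illustration of "transparent" rules-based AI versus a
# learned, probabilistic model whose reasoning is much harder to explain.
import numpy as np
from sklearn.neural_network import MLPClassifier

# 1. Rules-based: every output traces back to an explicit, human-written rule.
def rules_based_alert(speed_kmh: float, in_restricted_zone: bool) -> bool:
    """Flag a vehicle if it is speeding inside a restricted zone."""
    return speed_kmh > 80 and in_restricted_zone

# 2. Learned: the output comes from thousands of fitted weights, and the model
#    cannot point to a single rule that explains any particular decision.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 20))                      # 20 opaque sensor features
y_train = (X_train @ rng.normal(size=20) > 0).astype(int)  # synthetic labels

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)

x_new = rng.normal(size=(1, 20))
print(rules_based_alert(95.0, True))      # True -- and you can say exactly why
print(model.predict_proba(x_new))         # a probability, with no ready "why"
```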

JOEL ROSENTHAL: Thank you, Arthur. That was very helpful.

I want to turn now to Alex Woodson, who has the big job of weeding through lots and lots of questions. We will take as many as we can over the next 30 minutes or so.

ALEX WOODSON: Thanks, Joel. As you said, this is a very big job today, but these are great questions, so I'm happy to do it.

We'll start with Carnegie Council Senior Fellow Jeff McCausland. He did a webinar a few weeks ago. He has two questions. Both are great. I'm going to stick with one for now.

The question is: "Since the military now talks about cyber and space as warfare domains like air, land, and sea, doesn't the question of autonomous weapons and AI have even more significant implications in these realms as you could have automatic response by a machine in cyber or space that could have catastrophic consequences for escalation?"

ARTHUR HOLLAND MICHEL: That's a great question. I'm going to split hairs a little bit on that and regard space and cyber as very different domains where very different dynamics apply.

There is a growing conversation about the implications of AI in cyberspace, but it's a bit of a muddled conversation because a lot of the agents that already exist in cyberwarfare are at the very least highly automated and operate in ways that—to use the metaphor of a drone—are very distant from the operator and may go through several steps without, say, checking back in.

In a way some of these considerations already exist in cyberspace. It's certainly true that when you apply these AI methods in cyberspace you may get further gains in capability, and there may be tradeoffs there in terms of how much control you have on the system. But what we're still seeing in cyberspace is a difficulty in drawing those definitional contours around what would count as a truly autonomous agent in the way that we're debating them in regard to kinetic, physical weapons.

In space it's an angle that is much less explored but probably will be explored all the more with the advent of greater militarization in space. There is a separate program at UNIDIR that has a large initiative on what's happening in space.

From an autonomy perspective space is easier in a sense because there's much less to crash into, and of course there is much greater physical distance and you may have challenges maintaining communication between this agent and the operator. In that sense, at the very least there will be a stronger signal that creates demand for autonomy in space just because of the nature of the environment. Whether the application of autonomy in space creates unfamiliar questions or questions that we have not considered in the debate around ground-based or atmospheric autonomy, that is uncharted territory. That is a debate I would be very interested in watching unfold.

ALEX WOODSON: We'll go to a question that was emailed to us from Ahmedou Ould-Abdallah.

He writes: "With every innovation people are concerned with their privacy, and then they get to live with it. Won't that be the same with 'mysterious machines'?" So asking, will people just be accepting of these new machines in the future?

ARTHUR HOLLAND MICHEL: This is a question that I have grappled with a lot over the years. I always want to be the optimist in these regards. I think I have reason to be optimistic here because the debate around privacy that we're seeing today feels different to the debate a few years ago. It feels like perhaps there has been an inflection point on a broader scale, and that has been very much prompted by some of the technologies that I write about and that I feel are most concerning, things like social media or the application of artificial intelligence to dense networks of sensors, like cities that have lots of closed-circuit television (CCTV) cameras, putting an algorithm on top of that so you can track people better.

While there is a long history of us humans just getting used to greater and greater incursions upon our privacy, there have also been moments where we have said, "Actually, no, that's not okay."

An example that comes to mind immediately is the history of wiretapping. Pretty soon after the advent of the telephone, law enforcement agencies figured out how you could literally physically tap into the cables and listen in on whatever conversation you wanted. In the United States and in most other countries there was a pretty universal rejection of that possibility, and that is why there are strict warrant requirements for phone taps today. Law enforcement agencies can't just listen to whatever phone they want; they have to have probable cause. My sense is that it's just in our nature that when there is an intrusion that is a step too far, we say no.

I'll give you one more case that I think will qualify as an example of that phenomenon. A few months ago, thanks to the reporting of Kashmir Hill at The New York Times, it came to light that there was a company, Clearview AI, that had developed a very powerful, probably black box, AI system that could take any picture a police agency gave it and run it through an algorithm that would match it to publicly available pictures on the Internet. So, you walk past a CCTV camera, the police want to know who you are, they run it through the system, and it will match with your Facebook profile. There are billions of images in this database.

Very quickly there was an across-the-board rejection of the prudence of employing this technology at large, especially in the absence of very strict controls. A congressional inquiry was launched, op-eds were written, and there was a lot of media action around it. That again felt like a case where people said: "Actually, no. We get that there are all sorts of intrusions on our privacy and it's not really cool, but trawling the Internet for billions of pictures to match to images that police can obtain without any warrant requirement may be one bridge too far." All of that is to say I have reason to be optimistic.
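
Systems of this kind generally work by converting faces into numeric "embedding" vectors and searching a database for the nearest stored vector. The sketch below is a generic, hypothetical illustration of that pattern—the embed() placeholder and the random "profiles" stand in for a real face-recognition model and scraped data, and this is not a description of Clearview AI's actual pipeline.

```python
# A generic, hypothetical sketch of embedding-based image matching:
# represent each face as a vector, then return the closest stored profile.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder for a face-embedding model mapping an image to a unit vector."""
    v = image.astype(float).ravel()[:128]
    v = np.pad(v, (0, 128 - v.size))
    return v / (np.linalg.norm(v) + 1e-9)

rng = np.random.default_rng(0)

# Stand-in for a database of embeddings scraped from public profiles.
database = rng.normal(size=(10_000, 128))
database /= np.linalg.norm(database, axis=1, keepdims=True)
names = [f"profile_{i}" for i in range(10_000)]

def identify(query_image: np.ndarray) -> str:
    """Return the best-matching profile, however weak the match actually is."""
    q = embed(query_image)
    scores = database @ q                 # cosine similarity against every profile
    return names[int(np.argmax(scores))]

print(identify(rng.normal(size=(64, 64))))   # always returns *some* name
```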

ALEX WOODSON: We'll stay in an optimistic direction with this next question. This is from Christopher Macrae: "Is it possible to find a drone application, maybe AI agriculture, that 8 billion humans can get behind, beyond all the political arguments?"

Along those same lines, Raphael Maretto asks about unmanned aerial vehicles in the Democratic Republic of Congo, I think flown by the United Nations, that are protecting civilians.

ARTHUR HOLLAND MICHEL: There are applications of drones and all of the technologies we have talked about today that I feel everyone can get behind and that perhaps wouldn't require a rigorous application of the minutiae of ethically driven standards and regulations because we all want the thing to happen—agriculture being a strong contender, the use of drones for conservation, the use of drones for stringing power cables after a disaster. That last one was something that previously required an engineer to climb up a telephone pole, probably with live wires all over the place and the risk of falling. If you have a drone that can string that cable along, we can all get behind that.

This has been a particularly pronounced phenomenon in the context of the COVID-19 pandemic. I'm amazed that we have gone 45 minutes without mentioning it, but it was bound to come up. Very early in the pandemic there were a lot of discussions about how we can use AI to address this issue and how we can use drones to address this issue. There was a lot of enthusiasm for those hypotheticals. To be sure, this pandemic has been a terrible thing, and anything that can give us avenues to flatten the curve or make sure this ends sooner rather than later is worth exploring.

There are two things there. One is that the unconsidered use of technologies and applications that are really untested always carries risks. There is a reason that in critical applications we have thorough standards, and it just can't be any cowboy who wants to build a new airplane and fly it in the sky. Maybe it works great, but if it's a new application, there has to be testing. There has to be a level of control because we just don't know what will happen.

The second thing is that a lot of the technologies being proposed for addressing the COVID-19 pandemic are very powerful surveillance technologies. There is a concern that agencies will acquire these technologies specifically for the pandemic but then continue to use them, and the next time there is a protest or a movement they want to address, they will simply apply some of the same techniques—network graphing and such. That's why there has been a call for "sunset" clauses in some of these authorities. This is something that Alex and I spoke about in a previous conversation.

There is also this potential concern around the normalization of these technologies: a technology gets used in a beneficial public health capacity that we can all get behind, and that, somehow by virtue of the application itself, leads to applications that we can't all get behind. It leads to public acceptance that deserves more scrutiny than it is being given, but what is left in our minds is, "Oh, those drones are really great for addressing the COVID-19 pandemic." There has to be a lot of caution on all of those fronts.

The bottom line is, yes, there are applications we can all get behind, but it has to be a very considered process of getting behind these applications.

ALEX WOODSON: This is from Kwame Marfo: "Which global organizations are best positioned to provide governance in AI ethics?"

ARTHUR HOLLAND MICHEL: That's a question that I wouldn't be able to answer. One thing that has emerged very clearly from the debate so far, and particularly in early efforts to operationalize AI ethics, is that AI is not one thing; it's many, many things. It's not like mustard gas, a chemical weapon that does one thing and has one concrete set of dangers, such that an international treaty among governments is probably the best vehicle to address it, just because of the nature of the thing itself.

What we're seeing with AI is that it can do all sorts of different things, it takes all sorts of different forms, and it can be operated by all sorts of different people. In each of those combinations of application, system, and operator there are different concerns and different ethical values have different weightings, so there needs to be a more tailored approach.

My answer would probably be that actually there isn't one organization that is best placed, that this needs to be a universal and collaborative process of addressing the very many incarnations of AI, because if you try to create just one standard of explainability for AI systems, you may be doomed to fail because there are different needs in different sectors.

ALEX WOODSON: This is from Geoff Schaefer: "Regarding the lethality-enabling systems, it would be great to hear a bit more about the unique problems that stem from that symbiotic relationship. As an example, one thing that comes to mind is a scenario where the AI system is recommending targets with such rapidity in a kinetic battlefield situation that the human operator may not be able to provide any meaningful sort of control or decision-making guidance to counteract potentially dangerous choices by the system itself. This strikes me as akin to the problem of choice architecture on steroids."

ARTHUR HOLLAND MICHEL: That's a fantastic question from Geoff. I couldn't have said it better myself—that is one significant issue. It overlaps with another issue around these kinds of systems, these symbiotic or human-machine systems, which is the question of trust. A system operates really, really well, and you're supposed to be giving oversight to this system and checking its work as though you're a math teacher. But it does the work so well every time that you just start trusting it, and you start pressing yes, sure, sure, sure. That's good.

Then it does something wrong. In this case you have over-trusted the system. Sometimes people refer to this as "automation bias." Obviously, that's a danger. You don't want people to over-trust systems that need to be supervised closely. That's across the board, and it's a tricky problem in and of itself.

But what happens to that human-machine relationship after that first error? The machine told you with 95 percent confidence that it detected this thing that turned out to be wrong. The next time it gives you a 95 percent confidence assessment, are you going to trust it the same way, or is your trust going to swing in completely the opposite direction, where you actually have very little reason now to trust the system? And that could be equally problematic because maybe the system is indeed right, and in applications that we can all get behind you would want the system to be right and the human to pay attention to it. So that's one issue; "calibrating trust" is one of the technical terms.

The other—and I'm just going to allude to it because I would love to hear some of the other questions—is all the questions around data.

AI systems are sensitive to the nature of the data that they ingest in ways that are very difficult for us to even get our heads around, let alone for organizations to prepare for: If the data is not good, the AI system won't behave well or it will behave unpredictably; there may be bias in the training data that the system might then perpetuate; the data may be poisoned by an adversary; or the data that the system was trained on doesn't cover all of the eventualities that the system might encounter in the world, and then the system again might behave unusually. So data is another one that goes very much to this human-machine relationship, particularly in situations like Geoff mentioned, where there isn't time to go back to the data itself.
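
As a rough illustration of that data sensitivity, the sketch below shows one common failure mode: a model that quietly learns a "shortcut" feature present only in its training data and then degrades when that shortcut disappears in deployment. The data and model here are hypothetical stand-ins, not anything drawn from Michel's research.

```python
# A hypothetical sketch of training-data sensitivity: the model leans on a
# spurious "shortcut" feature that tracks the label only in the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# The genuine signal the system is supposed to learn from.
signal = rng.normal(size=(n, 3))
y_train = (signal[:, 0] > 0).astype(int)

# A shortcut feature that, in this (biased) training set, tracks the label
# almost perfectly -- think of an unrepresentative collection process.
shortcut = y_train + rng.normal(scale=0.1, size=n)
X_train = np.column_stack([signal, shortcut])

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on training-style data:", model.score(X_train, y_train))

# In deployment the shortcut no longer correlates with the truth, and the
# model, which has quietly leaned on it, typically degrades without warning.
signal_new = rng.normal(size=(500, 3))
y_new = (signal_new[:, 0] > 0).astype(int)
shortcut_new = rng.normal(scale=0.5, size=500)       # correlation broken
X_new = np.column_stack([signal_new, shortcut_new])
print("accuracy on shifted data:", model.score(X_new, y_new))
```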

ALEX WOODSON: This is a question that is probably on a lot of people's minds with the protests combining with the pandemic. Amar Adiya asks: "Is it technologically possible to track down faceless people in black?" I guess this means people wearing masks.

ARTHUR HOLLAND MICHEL: It's a difficult question. It very much depends on what the law enforcement agency in question has, how much time and resources they're willing to invest in tracking one particular person. If you want to track one particular person, you can get humans to do that, and it would be relatively easy if a little time-consuming, assuming that you have lots of cameras in the city. If you want to do that at scale, that becomes a vastly more complex challenge, requiring much more sophisticated technology and probably drawing from other data sets.

There has been some discussion about how commercially available cellphone location data from protests can be purchased and that this might give those who purchase it—including potentially law enforcement agencies—a good sense of where people live. But those are very initial reports. It's a very powerful, very intrusive, and highly unregulated form of data that we should definitely be looking at, because if there is that kind of tracking of, as you say, faceless individuals, that might be one of the techniques put to it. So the answer is, it really depends.

ALEX WOODSON: Time for maybe one more question. This is from Dom McGuire: "On the fifth principle, that humans are ultimately responsible, what are your thoughts on artificial general intelligence (AGI) in, say, 30 years, which may be behaviorally equivalent to humans and may be deserving of some form of moral standing so they could be responsible for their own actions? I raise this as AGI could be more powerful and capable than humans in the future and maybe more ethical in their actions than humans."

ARTHUR HOLLAND MICHEL: I'm going to give a very quick answer to that question, so maybe we have time to get to one more, which is, we have been talking about artificial general intelligence for many years. It seems like it is still very, very, very far off. It is a fascinating philosophical discussion that I do not have the answers to.

The good news is, we will probably have plenty of time to debate until we actually have to contend with that reality. We need to start spinning the wheels there, but we have some time, and I think we should take that time because it is a very loaded question.

ALEX WOODSON: From Lyantoniette Chua: "Are there any debates or existing discussions on how 'mysterious machines' will affect and take effect on existing international humanitarian law? What are your primary thoughts or insights on this?"

ARTHUR HOLLAND MICHEL: Yes, there is a whole process ongoing at the United Nations, which I directly support through my work, of debating exactly that question.

The fundamental question is: Look, we have these existing in many cases very robust laws to govern the conduct of militaries in war. These rules have existed for a long time. Are those rules enough to simply govern AI as well? Or does AI require new fit-for-purpose rules?

Even if you go with the former option, that those rules are enough to govern AI, there is consensus within this group of experts that is debating this at the UN level that there is still going to be a process of operationalizing the adherence to international humanitarian law in the application of some of these advanced autonomous systems. So, even if you do not have to reinvent the wheel and create totally new rules, there will still be a process of fitting those rules to these new technologies, which is a very natural process for many new technologies that have emerged over the years.

But I highly encourage you to look at the work of the Convention on Certain Conventional Weapons Group of Governmental Experts on Lethal Autonomous Weapon Systems, and you'll find a ton of resources on what that debate looks like so far and what it will look like in the next few years.

JOEL ROSENTHAL: Arthur, that was terrific. It was a really fast hour. I hope we will have a chance to continue this conversation in a future webinar or even better in an in-person meeting. Your work is really specific evidence that ethics matter. Everything that you have said points to the importance of establishing principles to help in the governance, regulation, and uses of these technologies. It's a vast topic, and you have given us a lot to think about and a lot to follow up on. I want to thank you for that.

I also want to remind the audience that we do record these webinars, so they will be available on the Carnegie Council website and also on our YouTube channel.

We will be doing more events in the future, and our next program will be next week at a special time. We will be convening on Tuesday evening, June 9, so that I can speak with our guest, who is in Australia—it will be Wednesday, June 10, for him. This is Christian Barry, who is at Australian National University in Canberra, and we are going to be discussing his new article, which is posted on the website for our Ethics & International Affairs journal, called "Justifying Lockdown." I hope that many of you can join us for that event and for our future events. We will be trying to do at least one a week into the foreseeable future.

Thank you again, Arthur. Thank you to everybody watching and listening, and we will see you next week.

ARTHUR HOLLAND MICHEL: Thanks so much, Joel.

JOEL ROSENTHAL: Thank you.
