The Promise & Peril of AI & Human Systems Engineering, with Mary "Missy" Cummings

Oct 8, 2021

In this episode of the "Artificial Intelligence and Equality Initiative" podcast, Senior Fellows Anja Kaspersen and Wendell Wallach are joined by former U.S. Navy pilot Mary “Missy” Cummings, a professor at Duke University, director of the school’s Humans and Autonomy Lab, and a world-leading researcher in human-autonomous system collaboration and robotics. The conversation touches upon the maturity of current AI applications and key conundrums in AI research aimed at making sure humans are not a design afterthought.

ANJA KASPERSEN: Today we are joined by the multitalented and extraordinary Missy Cummings. It is really hard to describe Missy in a short paragraph, but we will provide a few highlights. She is currently a professor at Duke University and the director of Duke's Humans and Autonomy Lab. Missy is regarded as one of the world's leading researchers in artificial intelligence (AI) and human autonomous system collaboration.

However, before her many feats in science and academia, Missy was one of the United States Navy's first female fighter pilots, a role she held for more than a decade. Missy's name is known to most folks who work in the domain of AI and human-centric systems engineering. She has served as an expert for countless high-level government and industry processes, earning a reputation as someone who always speaks truth to power, with deep scientific and social intelligence and a no-nonsense approach to building greater AI fluency.

Missy, Wendell and I have really been looking forward to this conversation with you, and we will get to the big issues around AI and autonomous technologies, but first, I have heard you speak several times of how Top Gun inspired you to seek out a career as a fighter pilot, and in 1994 you became one of the first women in U.S. history to be selected to fly the F/A-18 Hornet, which at the time was one of the most technologically advanced fighter jets. I just finished reading your memoir, Hornet's Nest: The Experiences of One of the Navy's First Female Fighter Pilots, and what a journey, Missy! Tell us more.

MISSY CUMMINGS: It's funny. Looking back, it was a lifetime ago literally. I call it "the best of times and the worst of times." I loved being a pilot. I loved flying high-performance aircraft. I was really in my element. When people talk about being "in the flow," it's amazing when you're in the flow with a high-performance aircraft and you are able to manipulate it to do the things that you want it to do.

But kind of the flip side of that, the opposite of being in the flow, is being in a social milieu where people are not happy that you're there, they don't want to talk to you, and you're not really part of the camaraderie of what it means to be a fighter pilot. It was a slow realization for me that I could be the best pilot that I could be and I could try my best to serve my country, but if the other fighter pilots did not want me on the team, there was nothing—I could be the best pilot I wanted to be, but nobody's a good fighter pilot by themselves. You need a team of people.

It was that realization—and it was a bitter pill to swallow—that it just wasn't going to happen. I wasn't going to be part of that team. There were a lot of reasons for that, and that's what prompted me to start looking around and looking at other options in my life. At the same time, so many people I knew were dying, all because of bad human-machine interaction, that it inspired me to start looking at ways I could formally study this and try to change what was obviously a problem.

ANJA KASPERSEN: I remember this line from Top Gun, which I think was by Maverick, the Tom Cruise character, and he says: "You don't have time to think up there. If you think, you're dead." Dramatic as that may sound, I know this is what partially prompted you to transition from your career in the military to thinking more deeply about participatory design and human systems engineering and robotics to make sure that humans are not just a design afterthought.

MISSY CUMMINGS: Yes. What's funny is that I was really motivated to move the Department of Defense away from designing for the human, or engineering any part of the system for the human, only well after a system has been deployed. It turns out that, as autonomy and automation have grown in other parts of society, that motivation is also really important in many other aspects of life, not just military aviation but particularly commercial aviation. And then, as we have started to see autonomy increase in vehicles, passenger vehicles, trucks, even rail, there is still so much to be done in the transportation industry but also outside of transportation. I think medical systems, and the autonomy that may or may not be appearing in medical systems in the future, are also going to be a significant extension of this work.

ANJA KASPERSEN: Before we continue, Missy, I will ask you, just for the benefit of our listeners, to quickly differentiate between autonomy and automation, both of which you have alluded to already, which often get confused. Could you explain the difference?

MISSY CUMMINGS: Yes. There is a lot of confusion, and there is no crisp boundary. Whether a system is automated or autonomous is really more a question of degrees of freedom.

An automated system is one that works by clearly defined rules. You can think of a computer program algorithm that is kind of an if/then/else—if your thermostat sees a temperature lower than what you set, it is going to kick on the heat until the heat rises to that point, and then the thermostat is going to kick off, and your system is automatically changing your house temperature for you.

Autonomy is the extension of that where you are thinking more probabilistically: Is the temperature lower? What time of day is it? Is the owner of the house on a regular schedule? Does the owner of the house have a sweater on? If the owner of the house has a sweater on, maybe the owner doesn't want the heat turned up, so there's a lot more reasoning and guessing behind what the autonomous thermostat, for example, would do, as opposed to the automated thermostat. There are fewer variables for automation, and they are much more clearly defined, while in autonomous systems there has to be probabilistic guessing and estimation based on a set of input variables and maybe not always a clearly defined set of output variables.

Where I tell people to draw the line is really anywhere inside a particular system where the uncertainty is growing to a point where it can be costly. So, if your thermostat isn't set correctly, and let's say it's an autonomous thermostat and it guesses wrong, then maybe you just turn it up or turn it down, and that's the biggest consequence of what happens. But if you're in a fighter jet that has autonomy in it and it guesses wrong about what the target is, or you're in a car and it guesses wrong that the obstacle that it's seeing in front of it is not a pedestrian, then these guesses can be fatal.
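To make the thermostat contrast concrete, here is a minimal sketch in Python. It is purely illustrative and not from the conversation or any real product; the function names, thresholds, and probabilities are invented assumptions. The automated version follows fixed if/then/else rules, while the autonomous version makes a probabilistic guess from several inputs, and that guess can be wrong.

```python
# Illustrative sketch only: a rule-based ("automated") thermostat versus a
# probabilistic ("autonomous") one. All names, thresholds, and weights are
# invented for illustration, not taken from any real system.


def automated_thermostat(current_temp: float, setpoint: float) -> str:
    """Clearly defined if/then/else rules: behavior is fully predictable."""
    if current_temp < setpoint:
        return "heat_on"
    return "heat_off"


def autonomous_thermostat(current_temp: float,
                          hour_of_day: int,
                          owner_home_prob: float,
                          owner_wearing_sweater: bool) -> str:
    """Probabilistic guessing over several input variables: the output is an
    estimate of what the owner wants, and that estimate can be wrong."""
    want_heat = 0.5                   # prior belief that the owner wants heat
    if current_temp < 19.0:
        want_heat += 0.3              # it is genuinely cold
    if 6 <= hour_of_day <= 9 or 17 <= hour_of_day <= 22:
        want_heat += 0.1              # owner is usually home and awake
    want_heat *= owner_home_prob      # no point heating an empty house
    if owner_wearing_sweater:
        want_heat -= 0.2              # maybe the owner prefers it cooler
    # The system commits to a guess; for a low-cost device a wrong guess just
    # means the owner adjusts it manually.
    return "heat_on" if want_heat > 0.5 else "heat_off"


if __name__ == "__main__":
    print(automated_thermostat(current_temp=18.0, setpoint=20.0))   # heat_on
    print(autonomous_thermostat(current_temp=18.0, hour_of_day=20,
                                owner_home_prob=0.9,
                                owner_wearing_sweater=True))
```

The point of the sketch is the dividing line Cummings describes next: the more the output depends on estimation under uncertainty, the more the cost of a wrong guess matters.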

WENDELL WALLACH: Missy, you're clearly a leader in illuminating challenges arising in transportation, particularly with the advent of increasingly autonomous vehicles, but you have also publicly warned against allowing any type of remotely controlled vehicle to drive on public roads. What are your concerns?

MISSY CUMMINGS: The remote control of a surface vehicle, road vehicle, whether or not we're talking about a passenger car or a truck, falls into this world of automation and not autonomy, so there is really no fancy probabilistic reasoning.

You can think of remote control of a car like just an extension cord. There are a lot of problems when you introduce that extension cord, problems that have been underscored in blood in American drone operations. One of the things that we found out early on is that when the U.S. Air Force started their drone warfare program, pilots were very mad that they were being taken out of the loop for the drone flights, and these are pilots who sit in trailers 4,000 miles away from their target. So they decided to start letting them take off and land the vehicles to make the pilots feel like they were needed, even though the automation could take off and land much better than humans could. This was still a way to let pilots feel like they were being pilots.

Then we found out, through a very expensive, long set of lessons learned, that when you put a lot of time and distance between the remote operator and the vehicle you get network delays, which is a problem, but on top of that you also get what we call human neuromuscular lag. Humans take about half a second to see a problem and then react to it, even for something very simple like pressing on the brake, and even a half-second neuromuscular lag, in aviation but especially in surface transportation, can be deadly because a lot of really bad things can happen in half a second on a roadway, particularly a congested roadway.

So it turns out that with remote-control vehicles, particularly surface transportation—it's especially acute for cars and somewhat acute for planes, but way worse for cars—you just don't have the time for a remote operator to understand a situation that is developing and respond correctly, particularly with an impoverished information set, meaning seeing the world through two-dimensional displays, and even if they have a whole wall of displays, it's still 2-D and it's still not the same fidelity of information coming into a driver who is physically there.

WENDELL WALLACH: So there is an inefficiency gap on top of what a human driver would confront.

MISSY CUMMINGS: Right. There are delays in the network, delays in the human response, and delays due to a lack of good-enough information, so that is all pretty deadly.
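As a rough back-of-the-envelope sketch of why those stacked delays matter: the half-second neuromuscular lag is the figure cited in the conversation, but the network-latency and speed numbers below are assumptions added purely for illustration.

```python
# Back-of-the-envelope sketch: how far a vehicle travels while a remote
# operator perceives a hazard and the command makes it back to the car.
# The ~0.5 s neuromuscular lag is the figure cited in the conversation;
# the speed and network-latency values are illustrative assumptions.

speed_kmh = 50.0                        # assumed urban driving speed
speed_ms = speed_kmh * 1000.0 / 3600.0  # ~13.9 m/s

network_delay_s = 0.2      # assumed round-trip video + command latency
neuromuscular_lag_s = 0.5  # human see-then-react time for a simple action

total_lag_s = network_delay_s + neuromuscular_lag_s
distance_m = speed_ms * total_lag_s

print(f"At {speed_kmh:.0f} km/h, a {total_lag_s:.1f} s combined lag means the "
      f"vehicle covers about {distance_m:.1f} m before any braking even begins.")
```

Even under these mild assumptions the vehicle covers roughly ten meters before the remote operator's input takes effect, which is the gap Wallach and Cummings are pointing to.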

ANJA KASPERSEN: Many years ago when we met I asked you if you would ride in a self-driving vehicle, to which you responded, "Only when you remove the steering wheel."

MISSY CUMMINGS: I'm not even sure. Maybe I've updated what I say. I'm not getting into any self-driving car anytime soon, and I say this all the time.

I work in artificial intelligence on a daily basis. I teach students, I grade their papers, I see their projects, and it's not pretty. It's not pretty because they make mistakes that are very typical but easy to miss, even mistakes they don't realize they are making, in terms of, say, how they parameterize a model. They don't really see them as mistakes. It doesn't seem to be a big deal, but then it has a big-deal effect later. So I find that there are so many points of subjectivity in the creation of AI that even designers—even very famous AI statisticians, engineers, and computer scientists—are still making choices about how they design AI that ultimately can have a very deadly impact on how that AI is used.

Where I think we find ourselves is in this really weird world where AI is seen as a mathematically objective tool, but it is in fact an incredibly socially constructed, subjective tool, and people are not willing to admit that because they don't want to admit that they are making choices as opposed to making decisions based on hard data.

ANJA KASPERSEN: Let's talk a little bit about the AI conflicts that you just described around the future of AI: why we should and how we can "rethink the maturity of AI in safety-critical settings," which is also the title of a recent article of yours about the limitations of deep learning. You caution in this paper that there is a risk of a new AI winter due to the increasing backlash over AI and privacy, and not least the mindless embedding, or "fake it 'til you make it" mantra, guiding immature AI applications deployed into our day-to-day lives.

You write: "In current formulations, AI that leverages machine learning fundamentally lacks the ability to leverage top-down reasoning, which is a critical element in safety-critical systems, where uncertainty can grow very quickly requiring adaptation to unknowns." You argue that this, "combined with a lack of understanding of what constitutes maturity in AI-embedded systems, has contributed to the potential failure of these systems."

I wonder if you can elaborate for our listeners what you mean by "maturity in AI-embedded systems," maybe with some examples as well.

MISSY CUMMINGS: When I think about mature systems with, let's just say automation at first, despite the fact that there have been problems with drones, we have also had more than 30 years to iron out the problems, and now drone operations are very mature. We quit letting pilots take off and land. We recognized that that was a problem. We put in the automation. We have some supervisory humans, who are effectively coaching the system as opposed to trying to directly control the system, and we have understood the problems with the communications lag and where that can be a problem, and we avoid those areas of operation.

So now, 30 years after the technology was introduced, we can say it's fairly mature because we know its failure modes, we know how to prevent the failure modes, and we know how to develop systems that are more effective than others. That is not always to say that we do; we just know how to. And this is why the drone market I think is still evolving because now we are seeing people develop better products, but for the most part the baseline is generally safe enough.

Unfortunately, we are still in the very early days of self-driving cars. What that means is we are still seeing emergent properties, emergent behaviors, out of systems that surprise us. If academics are being surprised by the behavior of machine learning, neural net, deep-learning-based systems, that is not good, because academics should not be surprised by a technology that in theory is mature enough for deployment, particularly in safety-critical systems like transportation.

A lot of the surprises that we are seeing relate to cybersecurity. For example, it is actually relatively easy to trick a face-recognition system or a computer vision system on a car if you know where to put some tape and how the underlying algorithms reason. It's not that difficult for somebody with even just a little bit of knowledge to mount an attack—for example, Mobileye has shown some research where they just had to add a little bit of electrical tape to a "35 mph" sign, and it can trick a Tesla's computer vision system into reading that sign as "85 mph." The bar for entry is very low to start tricking systems in ways that could threaten public safety.
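As a toy illustration of the underlying mechanism, and not of any real sign-recognition system: if you know a model's internal weights, a small, targeted nudge to the input can flip its output, which is the same logic behind putting tape in just the right place. Everything below is invented for illustration.

```python
# Toy sketch of why brittle models are easy to fool. This is a hand-built
# linear classifier on made-up features, not a real traffic-sign system;
# the weights and inputs are invented for illustration only.

import numpy as np

# Pretend these weights were learned: score > 0 means "reads as 35 mph",
# score < 0 means "reads as 85 mph".
w = np.array([1.2, -1.1, 0.5])
b = -0.35

def classify(x: np.ndarray) -> str:
    return "35 mph" if x @ w + b > 0 else "85 mph"

x_clean = np.array([0.6, 0.4, 0.3])   # an unmodified "35 mph" input
print(classify(x_clean))               # -> 35 mph

# An attacker who knows the weights nudges each feature slightly in the
# direction that lowers the score (the idea behind gradient-sign attacks).
epsilon = 0.05
x_attacked = x_clean - epsilon * np.sign(w)
print(classify(x_attacked))            # -> 85 mph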

But there are other problems too. Research that we are doing in our lab right now is looking at if you label the underlying data a particular way, can you actually introduce bias in the outcome based on what is potentially a very subjective way of labeling the data? I think this idea that data can be curated in a way that dramatically affects the outcome is potentially a very big deal, and the fact of the matter is that we are now putting computer vision systems out on the road, and then when people see a Tesla car or the Uber car that killed the pedestrian, when they crash and kill people, some people are just shocked—shocked, I tell you!—that the computer vision system of the cars failed.

I would take the other side. I am actually surprised that they don't fail more often, because we know that the computer algorithms are extremely brittle, owing to their sensitivity to how the data is labeled, so how much bigger are these problems? Where do we draw the line? How do we know how to safely curate a data set so that it is representative of the real world, there are not a lot of errors in how the data is labeled, and it is labeled crisply enough, whether by a machine or a human, to get the results that we want? These are actually some very basic issues that have never been explored and that could have a very big impact on safety. Because of that I would call computer vision, especially in cars, very immature, because there are still a lot of questions that we have not answered. I think eventually we will get there, we will have some answers, and we will know a lot more, but we are not there yet.

WENDELL WALLACH: This is a really rich area, Missy, and I would like to push you a little bit further on it. You have been talking about mistakes, but you also seem to suggest that part of the problem is in the way we're programming the systems. I wonder whether you are concerned about the unpredictability of AI systems more generally, as something endemic, or about the unpredictability of complex adaptive systems, or do you believe that engineers are going to be able to effectively minimize the likelihood of low-probability, high-impact events?

MISSY CUMMINGS: These are all good questions, Wendell, and I am going to put this in a framework: right now I am strictly talking about deep learning systems, any AI system where there is a neural net. There are other kinds of AI, what we call "good old-fashioned AI" (GOFAI)—we could talk about that later—but in terms of looking at any artificial intelligence that relies at all, to any degree, on a neural net, it is my professional opinion that if that system is a safety-critical system, it will never work in the way you say it's going to work if it cannot represent reasoning about the world around it—what we call "top-down" reasoning—in the way that humans do, because there is simply no way to control for all the uncertainty.

The "singularity" crowd will tell you that if they had all the data of the universe and we had a massive computer that could process all the data of the universe, that all important relationships—physics, social relationships—everything could be learned in the data. I find this an amazing deep fallacy.

The way that neural nets work, I don't care whether they're a one-layer neural net or they have 2,000 layers in deep learning, they all just take data that represents the world around them and learn an association of mathematical weights to represent that world. That's it. That's how they all work. And if that system does not see a representation that matches those weights, then the system will never be able to recognize it.

There is no inductive reasoning like humans have. There is no ability to see close approximations and make an almost, as-if association, so if we have a fundamentally flawed technique, as I believe neural net techniques are, I just don't think we can ever get there, and by "get there" I mean we can never have AI systems that can appreciate seeing something new if they always have to match it against something they have already seen.
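A minimal sketch of that "matching against learned weights" point, using an invented toy prototype classifier rather than any real deep network: the model is forced to map every input, including something it has never seen anything like, onto one of its learned categories; there is no "I have never seen this before" output and no top-down reasoning to fall back on. The labels and feature vectors are made up for illustration.

```python
# Toy sketch of "matching against learned weights": a prototype classifier
# that can only map inputs onto categories it has already seen. The feature
# vectors are invented for illustration; no real perception system works on
# three hand-picked numbers.

import numpy as np

# "Learned" prototypes (think of them as the weights a network converged to).
prototypes = {
    "car":        np.array([0.9, 0.1, 0.2]),
    "pedestrian": np.array([0.2, 0.9, 0.1]),
    "bicycle":    np.array([0.5, 0.5, 0.8]),
}

def classify(x: np.ndarray) -> tuple[str, float]:
    """Pick the closest prototype and report a softmax 'confidence'."""
    labels = list(prototypes)
    dists = np.array([np.linalg.norm(x - prototypes[k]) for k in labels])
    scores = np.exp(-dists) / np.exp(-dists).sum()   # higher = closer
    best = int(np.argmax(scores))
    return labels[best], float(scores[best])

# An input resembling the training data gets a sensible answer.
print(classify(np.array([0.85, 0.15, 0.25])))   # ('car', ...)

# Something genuinely novel (say, a person pushing a bicycle across the road)
# still gets forced into one of the known buckets; the model has no way to
# flag "this does not match anything I have learned."
print(classify(np.array([0.6, 0.7, 0.9])))      # ('bicycle', ...)
```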

You can't model the universe in this way because the most wonderful thing about the universe is that it is always surprising us. There is something new. People say, "Shakespeare wrote everything that has ever been written." I don't agree with that. I think humans are very creative, and humans come up with new representations and new relationships, and science is discovering new elements and relationships, not just in our species but in other species. So I think that this idea that you can take approximations of the world around us and represent everything in that way has just zero insight into what the real strengths of humans are.

WENDELL WALLACH: I'm thrilled that you brought in the ideas of the "singularitarians." As you know so well, there is often a tension between those of us who focus on these near-term challenges and those in the AI community who want more attention directed to the existential risks posed by the future possibility of smarter-than-human systems. I deeply appreciate, and often witness, how you have been at the forefront of much of this discussion.

Beyond the difficulty of representing the world fully for a system and its ability to engage in top-down reasoning, are there other concerns you would like to share, things you feel that those who are focused on the longer-term challenges overlook or don't understand?

MISSY CUMMINGS: I am the first person to tell you that I think we need to be very, very careful how, when, and where we're installing artificial intelligence, whether or not we're talking about military systems, medical systems, or transportation systems because I see it as a deeply flawed technology, and the people designing these systems don't appreciate where these deep flaws exist and really how deep they go.

That being said, because we have so many famous people who fundamentally refuse to accept the limitations of these systems, I think that we see a lot of people almost like Chicken Little—"The sky is falling!"—because AI is about to take us all over and we're going to go into "the Matrix" and the machines are going to rule. I'm not worried about that at all. I have zero percent of my brain that's dedicated to worrying about runaway artificial intelligence that is going to outsmart us, destroy the climate, and enslave humanity.

What I'm really worried about is all the incompetent AI that is out there, where people are designing these systems. We have very famous people who simply refuse to acknowledge what I have just said, which is: I don't care how deep your deep learning algorithm goes. It cannot reason under uncertainty, ever, and if you are only ever relying on it to recognize what it has seen before, especially in safety-critical systems and especially in warfare, you are never going to be able to generalize that to adaptive situations.

That said, there are some places—and I have been very vocal about this—where in the military I believe that artificial intelligence that can do pattern recognition on buildings, buildings that exist and that you have lots of images of, can drive the uncertainty down so low that I believe it is actually more ethical to let AI prosecute static targets, because the likelihood that the AI will make a mistake is lower than the likelihood that a human will make a mistake, particularly a fighter pilot who is physically in a plane, being shot at, and under pressure. There are some areas where I think artificial intelligence could actually be a real game changer toward more humane forms of warfare.

I realize that is something of an oxymoron, but I think about what just happened in Afghanistan, where a car was taken out with children in it. Is it possible that AI could have been developed to recognize children? That's not that hard to do, I think. And why couldn't we layer AI, like a self-check system, so that if children are detected on any target where there is an incoming weapon, the weapon is redirected to fire into a known safer area that is not populated? It is actually not that hard to do.

So I think that we should not be throwing out the baby with the bathwater. We need to just be clear about where AI can and can't do the job that we think that it should be doing, and instead of focusing on human replacement we need to focus on collaboration with humans. How can we design joint tasks so humans and AI can work together as a team as opposed to trying to replace humans?

WENDELL WALLACH: You seem to think that autonomous systems should be acceptable for static targets, but it's not clear to me exactly where you stand on more dynamic targets, and obviously the children in the car is an example of that.

MISSY CUMMINGS: On the dynamic target I don't think we should ever let AI—and I think this is going to be true for my lifetime—make a choice to fire a weapon because there is still so much uncertainty, but can we use AI defensively to cause a weapon not to fire? Sure, because indeed not firing is a safe state as opposed to firing. So, yes, for defensive purposes can we use AI in the Iron Dome? Sure, defensively, but offensively I still think we're just not there, and we won't be in my lifetime.

WENDELL WALLACH: Are you concerned that we are pretending autonomous systems can be patched once deployed? Do you feel that this is naïve, costly, dangerous, and possibly not feasible, as I do, or is that not really your view? Are you really saying, "Well, we can do a great deal of experimentation by putting systems out in the field"?

MISSY CUMMINGS: My feelings are a little bit of all of the above. Look, seriously, patch, no way. If you have fundamentally flawed AI, there is no patch. You just can never patch a fundamentally flawed system, if that is the problem.

I also am a big fan of the idea of doing over-the-air updates for cars as long as you're not doing over-the-air updates of safety-critical elements of cars. For example, if Ford develops a new fuel emissions algorithm that can make your car even more fuel-efficient, and we can just upload an algorithm and it just slightly changes the air-to-fuel ratio, I am all for that.

Where I stand against that is if you want to update the computer vision system and you think you have a new pedestrian-detection algorithm that has never been tested, I think it should absolutely not be allowed for you to upload that algorithm and use the American public, the German public, or whatever public it is to do your testing for you.

When we say "patch" I really think of it more as a software update, but we need to be clear that all software updates are not the same. And I think more importantly what I worry about is regulatory agencies—and this is true worldwide—just cannot keep up with what even that means. How do we even start to address what it means to do a software update in any safety-critical system? How does regulation need to change and adapt where software changes are the big changes instead of hardware changes like we have seen in the past?

WENDELL WALLACH: Before I turn this over to Anja to introduce some other subjects, I would like to just go back to this tension with those focused on longer-term existential risks. I am wondering whether you feel that they are inappropriately dismissing the near-term concerns that you and others are raising.

MISSY CUMMINGS: Yes. I am very concerned about the Future of Life Institute, for example, leaping over the near-term problems with how we're starting to roll out AI, because these are real-world problems that touch people's lives. I love H. G. Wells and all the futurist thinking, and I am all for thought exercises, but when the Future of Life Institute takes the position that AI is mature enough, that it is just around the corner, and that it is going to start selecting and wiping people out based on its own internal autonomy, what that does is make people, and potentially governments and research funding agencies, not realize that there are more critical, pressing areas that need focus.

Maybe now I think we have a glut of people worried about the existential ramifications of AI. I am not saying you shouldn't think about it. Have fun thinking about it, but in the meantime, all of you people who are thinking about existential threats of AI need to start arguing about this over-the-air software update: When is it ethical to do it? How do we draw the line? Where do we draw the line? What about equity? Is it all right for Silicon Valley to develop autonomy that is based on pedestrian-crossing behavior in America, and then we export that to other countries, like China, France, and Japan? These cultures have very different road-crossing behaviors, so there are cultural and equity issues about how we need to develop software to be adaptable to other places and cultures.

Again, fine if you want to have the long-term debate—I would love to sit at a bar and debate with people about whether or not robots are going to kill us all—but in my day-to-day work I need a lot more help looking at the near-term policy and ethics of all this incompetent AI running around.

ANJA KASPERSEN: I think those are very important points you raise, Missy. I have heard you say many times that beyond the existential risks we have something much more immediate, which is just human boredom and our inability to deal with automated tasks. Can you say something about that?

MISSY CUMMINGS: Yes. I got into the boredom research when I was doing a lot of work for the military on their drone programs, and we saw how operators would just be sitting for hours, bored out of their minds, looking for anything to do to stave off the boredom. People can be in literal physical and mental pain when they are exposed to long stretches of boredom, vigilance, and monotonous tasking. That's what motivated me to get in there.

But then what I recognized is that we are starting to see those lessons learned play out in the driving world—and this has been true historically: aviation is the first to introduce automation, we see all these problems, and then a few years later they all eventually show up in the automotive world. You can see this in spades in all the Tesla crashes.

I am a big fan of Tesla. I like Teslas, except I am not a big fan of Full Self-Driving or Autopilot, because the systems give you the impression that they are more capable than they are, and what we know from years and years of aviation automation research is that automation just has to be good enough. It doesn't have to be great. It doesn't even have to be near perfect, but as soon as automation looks like it has some competence and can do the job a little bit, humans have a tendency to get bored very quickly and then give up legitimate authority to automated systems. So we are seeing people crash.

It's almost like a day doesn't go by where I don't get somebody tagging me on Twitter because there has been some new Tesla crash where a person was on Autopilot and crashed into the rear end of a fire truck, a police car, or the broadside of another truck. Some have been lethal, and some people have lived through them. We have to recognize that this babysitting of automation is a problem in the aviation world too, and I know firsthand that, because the environments are so boring, many of the peers I flew with in the military—actually I think every man and woman to a T—have told me that they wish they had done what I did, which is move into academia, because they find that flying aircraft can be so boring and tedious.

We've got highly trained people who at least have somebody else in a cockpit to keep them entertained. In cars we don't have that, and people have their cellphones next to them, and they have the neuromuscular lag problem that we talked about before. So now it's like the perfect storm. People have pretty good automation but not perfect, they're bored, their cellphones are there, and "I just need to look at my phone for a little bit," and then something bad happens and I just cannot respond in time.

Boredom is important in autonomous systems because we have to recognize if we don't do something about the way we're designing the job and giving people meaningful work and meaningful activities in that job, that boredom is going to result and then bad things can happen once people are bored.

ANJA KASPERSEN: What you're saying is that boredom in some ways makes us relinquish that control. It's not the trust in the machinery or in software. It is the tediousness that prompts us to relinquish that control in some ways.

MISSY CUMMINGS: Yes, I think it's kind of a joint effect, because we see the automation perform well enough—it doesn't have to be great, just good enough—that we develop inappropriate trust, and that is exacerbated by the boredom. We want to believe—you can watch many, many YouTube videos where people want to believe—that the cars are far more capable than they are, because they want to believe that they can free themselves up for meaningful work, and I think that's kind of the funny thing.

We need to recognize that people—in cars, in process-control plants, and in cockpits—get distracted because they're trying to stimulate their brain. They want something meaningful to do. Now we may not always agree with what their definition of meaningful is, but their brain is craving activity. If we know that and we know that is the end state to pretty-good-but-not-perfect autonomy, then we need to do more to make sure we can at least try to give people meaningful tasks, or, if we can't do that—it's difficult in driving—we can at least design a system to mitigate the consequences of their boredom.

ANJA KASPERSEN: Have you experienced pushback for some of your work and trying to speak truth to power, Missy? Have you been trolled or cyberstalked?

MISSY CUMMINGS: Yes. It's funny. I spent the first ten years of my academic career at the Massachusetts Institute of Technology, and I was doing so much drone work. I got a lot of pushback because people at the time saw this as a horrible military weapon and how dare I do research to help the military war machine. So that landed me in some hot water at the very beginning of my career. In fact, I was on The Daily Show with Jon Stewart, and I'm trying to talk with him about how drones are not just military technology, that their real value is in commercial technology, and he's very funny, he makes fun of me, and we have a good time over that.

At that time, social media and Twitter were just emerging, so there weren't a lot of ways for me to get directly attacked, although I have to tell you I got so many bad, very mean reviews of articles that I would write. I would get reviews like: "Why are you doing this work in drones? It's a niche technology. Nobody cares about it. It's just a military technology. You're barking up the wrong tree."

I have known what pushback is from the very beginning of my career, and indeed, because drones then commercialized fairly quickly—in fact right after my Jon Stewart appearance—I slid over into autonomous cars because I knew that this was going to be the next big area, certainly for the research that I do. I have been in that space for about ten years now, and I have been very vocal about the limitations of the technology and how it applies to many, many different transportation systems, but obviously with cars we are starting to see a significant uptick in the consequences of people's behavior. It would be one thing, I guess, if drivers just killed themselves, but now we are starting to see other people killed, either passengers in the cars where people make mistakes or the people whose cars are being hit, like first responders and police.

I am one of the few academics who is old enough—I call myself a "curmudgeon"—and far enough along in my career that I am totally protected. I have tenure. I can say whatever I want, and because I'm in this protected place I am going to speak truth to power. I am going to tell people: "Look, I want this technology, but I also recognize that the technology is not there yet, and it's probably safer for the public that my 14-year-old eventually learn how to drive as opposed to turning it over to self-driving cars that can't even closely approximate what will eventually be a 16-year-old."

A lot of people push back because they don't want me to reveal what—by the way, this is what is so crazy—everybody in machine learning, everybody in computer science, everybody who really understands this technology knows I'm right. They know I'm right. They know that this technology is not going to generalize, but there is also so much money for universities in telling companies and government agencies what they want to hear. Nobody wants to fund anti-innovation. People only want to fund the next new cutting-edge whizbang technology.

I personally think we should be funding a lot more process-oriented research, like, "How do we know when AI is trustworthy?" People don't want to fund that. People want to fund very cool AI that's going to revolutionize the way we get to work. Okay. Well, these two things go hand in hand. You really do need to be doing the process work at the same time as the tech development, but nobody listens to me. And not only that, when I start pointing out areas where things have gone really wrong, Tesla unfortunately is the one that is out there the most, because they have the most advanced cars but also the most egregious examples of bad autonomy in the public's hands. So I use those examples because they exist. I just have to open up my daily email or Twitter feed to find the latest example of how autonomy is going wrong.

What we see is what I would call the "Tesla Illuminati," the Tesla fans who are really pushing the stock price—and that's my theory, that people who own Tesla stock are really trying to push it up. We've got fanaticism that I used to say borders on religious fanaticism. I think it is squarely in the religious-zealot space.

When I start to say, "Look, autonomy designed incorrectly is bad, and by the way, here are some examples, one of which includes Tesla," then I have had to shut my Twitter account down temporarily because of bad-boy behavior. I am constantly besieged at work with complaints—anonymous complaints, mind you—to the point where I am considering what are my personal security options and how might I have to change my life because people are so mad that I'm speaking the truth.

I think this points to where we are in our society in the year 2021. We are having trouble as a society splitting fact from fiction, but even when a researcher speaks a truth that everyone in the field knows is true, elements of the public can start pushing back so much that people are trying to silence me, and I do wonder how far this is going to go. How far are people willing to go to silence science that disagrees with their stock price?

ANJA KASPERSEN: I'm sure there are listeners who would also agree on the unacceptability of this type of behavior and attempts to silence, like you said, diverse views in this field.

I would like us to dive deeper into the unique intersection between science, the future of mobility, international security, and next-generation military affairs and warfare, which you have already alluded to, in which you operate, Missy.

A core concept in military affairs and culture is that of command and control. You have written extensively about the potential and real yet often unanticipated impact of AI systems on decision-making support systems for evolutionary command-and-control domains. Can you share with us just briefly your thoughts on this and how this will change the character of warfare and maybe even the nature of warfare looking ahead?

MISSY CUMMINGS: When I look into my crystal ball and think, How is artificial intelligence going to change warfare?, I think for the most part it just extends ideas that we have already had in warfare, and by that I mean we have been trying to increase the distance from which we kill other people in warfare since the dawn of the bow and arrow. As cavemen we had clubs, and we decided that was distasteful, so we moved back to bows and arrows, and then we moved back to guns and missiles, and we have long-range bombers, and now we have artificial intelligence, where in theory we can launch whatever weapon we want from whatever distance we want, and the vehicles can potentially sort out themselves how to execute our plan.

I don't think AI has actually fundamentally changed what people try to accomplish in warfare, but I think it changes how they potentially go about it. I do worry that the step change we are seeing is not a step change in goals and intentions; it's a step change in the competence of the people building these technologies, and by that I mean that software development is hard. It's very hard. But companies and even academia treat software development as a necessary add-on feature and not its own entity, and because of that we do not have enough people in leadership positions and in development positions—and this is true in all governments, by the way, all nation-states and all nations' industries—who really appreciate and understand the nuances of what it means to start developing primarily digital systems that occasionally rely on hardware.

Most people think it's the other way around, that we have some hardware and maybe it has a little bit of software in it, but we are at a step change where the real capabilities are now going to be virtual. And hardware engineers, mechanical engineers for example—I'm sorry, mechanical engineering: you're going to be important in the future, but you're not going to be important if you don't understand how software comes into your system.

I actually think engineering needs to fundamentally change—all of engineering. All engineers should learn how to code, and that is not a popular viewpoint. I'm not saying everybody needs to be a hacker, but everybody needs to understand what it means to code, how code works, how we develop these systems, and where the error states are because they affect every other system. So the step change that I see going forward is that the military nation-states that are going to rise up are going to be those that can command the software, who know what they're doing with software, instead of the countries who still predominantly rely on a hardware-oriented mentality.

ANJA KASPERSEN: So command and control the software.

MISSY CUMMINGS: That is correct.

WENDELL WALLACH: There has been this massive upsurge in AI ethics with principles, standards, and guidelines, and I am wondering whether you perceive there is ethics washing going on, or in your experience have most parties been sincere in their attempt to ensure that their applications are not only safe but function appropriately?

MISSY CUMMINGS: My feeling is that when I think about military applications—I can speak to the United States specifically, and I also have a lot of European peers—I do think that Europe and the United States are trying hard to think through some of the ethical implications of AI in weapons, and that is actually not that surprising because, for example, the military agencies have always formally taught ethics. This is not new thinking to them. We could argue about whether or not their thinking is correct or whether or not you agree with it, but they at least have explicit formal programs in the ethics of AI.

I think the bigger problem is corporate companies, whether they are intentionally sticking their head in the sand or not even realizing that there are ethics surrounding these issues.

ANJA KASPERSEN: Ending where we started, with your courage, Missy: in your memoir you speak about the difficulties of being at the vanguard of cultural change. Building on your experience of being at the vanguard of changes in the military, and on your work in human factors systems engineering and as an experimentalist, what are the one or two insights you can share with those listeners who are, or will find themselves, at the vanguard of change?

MISSY CUMMINGS: Well, it takes courage to step up and speak the truth, but we are never actually going to get systems with AI in them to work if people keep ignoring the elephant in the room. So, you have to stick with it. And you have to be at the right place in your career. It's a luxury for me to be able to speak truth to power because I'm in a position where people can't fire me for speaking truth to power. Not everybody has that luxury. But I think at least recognizing what the truth is and trying to design systems that don't fail in these obvious ways is the real challenge for so many people out there who are in the trenches.

ANJA KASPERSEN: Thank you so much, Missy, for taking the time to share your knowledge and insights with us, and thank you to all of our listeners. We hope we deserved your attention and time. And a huge thanks to the team at Carnegie Council for Ethics in International Affairs for organizing and recording this podcast.
