The Pros, Cons, and Ethical Dilemmas of Artificial Intelligence

Sep 26, 2016

TV Show

Highlights

From driverless cars to lethal autonomous weapons, artificial intelligence will soon confront societies with new and complex ethical challenges. What's more, by 2034, 47 percent of U.S. jobs, 69 percent of Chinese jobs, and 75 percent of Indian jobs could all be done by machines. How should societies cope and what role should global governance play?

STEPHANIE SY: Welcome to Ethics Matter. I'm Stephanie Sy.

We are talking about artificial intelligence (AI). It is a vast topic that brings up monumental questions about existence, about humanity; but we are going to focus, of course, on the ethics. In fact, some of the world's top scientists have said recent breakthroughs in AI have made some of the moral questions surrounding the research urgent for the human race.

For this we are joined by Wendell Wallach, a bioethicist at Yale University and the author of A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control. Wendell was recently named co-chair of the Council on Technology Values and Policy for the World Economic Forum. So, Wendell, you are perfect for this discussion.

Before we dive into the many hot topics when it comes to AI, I know that the term is sort of an umbrella term that encompasses a lot of different technologies. So will you just give us sort of the beginner's explanation of AI to start us off?

WENDELL WALLACH: Originally, artificial intelligence, a term coined in the 1950s at a conference at Dartmouth, largely meant achieving a kind of human-level intelligence within machines. The people at that conference thought that would happen within a decade or so. They were very optimistic. They thought the challenge was relatively easy.

It has now become a bit more confusing what the term actually does and doesn't mean, largely because every time a goal is achieved, such as beating a human at chess, the bar gets raised. Somebody says, "Well, that wasn't really artificial intelligence in the way it beat the human at chess, in this case Garry Kasparov, because it didn't really play the way a human chess player would play."

But even the folks in the more advanced fields of artificial intelligence feel today that we are just beginning to have true artificial intelligence, that a lot of what we have done so far is largely automating systems, largely programming them to follow through procedures that humans have thought about in advance.

STEPHANIE SY: That was a great place to start, because in fact one of the trending hot topics in AI is that Google reached what has been called by some "the Holy Grail of AI." They programmed a computer to beat the best go player in the world. That was a game that I played with my grandmother when I was a kid. I was never very good at it. It's very complicated.

Why was that so significant? Why was it a bigger deal than when the IBM computer beat Kasparov in 1997?

WENDELL WALLACH: It's just that it's a more complicated game and a game that takes more imagination in terms of how you play it, how you execute it. It had played a European champion, and then a month later they had this million-dollar challenge with Lee Sedol, who is one of the great Asian players, considered by many to be the greatest player in the world.

The program was developed by Google DeepMind, which is a very interesting company in the field of deep learning. Deep learning is the great breakthrough that has opened the doorway for people to feel that artificial intelligence is finally coming into being. While the term seems to suggest that these computers have very sophisticated learning capabilities, it's really a narrow form of learning that they are engaged in. But what they have been able to do is solve some of the problems that have bedeviled computer scientists for decades, problems in simple perception. So a deep learning computer can actually label all the objects in a photograph accurately. That was something that had not been done before.

STEPHANIE SY: But doesn't a lot of deep learning basically involve statistical learning; in other words, a sort of number crunching? How much of deep learning involves imagination, creativity, and adaptation in the way we would define human intelligence?

WENDELL WALLACH: Well, whether it really involves human intelligence, that's a very different story. But it does involve what are called neural nets, which are a form of computer programming that tries to emulate, a little bit, what's going on in the brain, or at least a computerized model of what's going on in the brain. Your desktop computer has one central processor that manages all the computations. A neural net can have many different processors.

The other thing about a neural net is that it is built in layers. We don't necessarily know what the computer is doing in those middle layers. So in this case, what deep learning computers are doing is analyzing a massive flow of data and discovering patterns, many of which humans perhaps don't even know about or can't recognize, and therefore they are finding new ways of perceiving, or at least understanding, some of the existing strategy.
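
To make the "layers" idea a bit more concrete, here is a minimal toy sketch in Python with NumPy (purely illustrative, not AlphaGo or any DeepMind code) of a small feed-forward neural net whose hidden layers compute intermediate representations that a human never directly inspects:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random weights for a tiny 4 -> 8 -> 8 -> 1 network (sizes chosen arbitrarily).
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 8))
W3 = rng.normal(size=(8, 1))

def relu(x):
    # Simple nonlinearity applied between layers.
    return np.maximum(0.0, x)

def forward(x):
    h1 = relu(x @ W1)   # first hidden layer: a learned re-description of the input
    h2 = relu(h1 @ W2)  # second hidden layer: patterns built on top of patterns
    return h2 @ W3      # output layer: e.g. a score or a move preference

x = rng.normal(size=(1, 4))  # stand-in for perceptual input (pixels, board features, ...)
print(forward(x))
```

The point of the sketch is only that the intermediate values h1 and h2 are discovered during training rather than spelled out by a programmer, which is why the "middle layers" can be hard to interpret.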

STEPHANIE SY: So starting from that, did the victory of AlphaGo over Lee Sedol—that victory of machine over human in that context—cross a threshold that concerns you as an ethicist?

WENDELL WALLACH: No, this one doesn't at all. I think it only does in the sense that people get caught up in these human/machine comparisons.

The only comparison that really should be made here is that we humans have something that we call "winning the game." And yet AlphaGo, the software program, won the game. It figured out how to play against Lee Sedol. It won four games to one, and in one of the games it made a move that surprised everyone, a move that perhaps no human player would ever have seen as connected to that board position. So that was fascinating.

But I actually sat on a panel with Lee Sedol. One of the things that I noted was, first of all, that the computer did not play the game that Lee Sedol was playing. Other than the winning, other than that measurement we make at the end of the game, what the computer was engaged in was very different from what Lee Sedol was engaged in.

Secondly, this computer had actually played or studied millions of games of go. A human being like Lee Sedol has maybe played tens of thousands—I don't know exactly how many games he has played. But perhaps what's remarkable is that, with all these resources and programmers and teams trying to figure out how to make the machine so smart, this human was still able to win one game. He believes he could have won more. He thinks another game revealed a flaw in AlphaGo's programming, but he didn't exploit it effectively.

STEPHANIE SY: I think the reason why people like that man versus machine, and they talk about it in that way, is because humans are worried about being replaced to some degree by machines. That is already happening, right? According to a study out of Oxford a few years ago, 40 percent of jobs will be automated by machine learning and mobile robotics.

WENDELL WALLACH: They actually said 47 percent of existing American jobs could be replaced. When their form of analysis was applied to China and India, it was 69 percent and 75 percent.

STEPHANIE SY: Which is concerning.

WENDELL WALLACH: Mind-blowing.

STEPHANIE SY: Yes, mind-blowing—by 2034.

So that certainly must bring up ethical issues around the people who are creating these machines and adopting these technologies.

WENDELL WALLACH: Right. This is a very difficult issue. It's a long-running concern—the Luddite concern going back 200 years—that each new form of technology will rob more jobs than it creates. Up to now we haven't seen that. Each new technology eventually creates more secondary jobs than it eliminates.

But I am among those who believe that increasing automation is already putting downward pressure on wage growth and job growth. And it's not only downward pressure: as we get more and more intelligent machines, we are going to see this pick up rapidly. It doesn't take a lot of jobs being replaced in a year to create a panic. But the difficulty is, is this an artificial intelligence problem or is this a societal challenge?

I actually talked with the president of a company that was building robots basically to take boxes off of racks, move them, and put them on trucks. His concern was that there are millions of workers in the world, perhaps 25 million to 100 million or more, who do this kind of work. On one level it's inhuman work. But he recognized, "We're going to take jobs."

I looked at it and I said: "Well, I don't know that this is really his problem. Societies are going to have to adopt technologies that improve productivity, and it's not bad that we're taking away inhuman jobs. On the other hand, we have a major societal challenge here: If wages are no longer the main way in which people get their incomes, have their needs provided, then how will they get those needs provided?"

STEPHANIE SY: This is when conversations about universal basic income come up—and that has come up.

I guess it also leads to the question of whether, even if we can automate more, and even if we have the capability to replace professionals with machines and with robots, should we?

WENDELL WALLACH: That's a difficult question. I think every society has to have that debate, and societies are going to differ on it. Unfortunately, as with most ethical issues, there's not a simple yes or no. There are different values; there are different prioritizations of values. So it becomes an ethical issue in terms of what the options are. Ethics is often not about "do this or don't do that." It's: If you go down this road, that has certain ramifications, and how are you going to deal with those ramifications? Or if you go down an alternative pathway, how will you deal with those considerations?

STEPHANIE SY: They are such important questions to be asked, I think, because this technology is happening now. Truck drivers and cab drivers could be some of the workers replaced. That brings me to the next hot topic in AI, which is self-driving vehicles. Proponents say that self-driving cars could reduce the some 40,000 traffic fatalities that we have each year just in this country.

WENDELL WALLACH: That might be an exaggerated figure. The National Highway Traffic Safety Administration did a study between 2003 and 2007, and it found that human error was a factor in as much as 93 percent of accidents.

STEPHANIE SY: There have been a couple of accidents already with these self-driving cars. Those seem to make people uncomfortable with the technology; they give us pause. It must bring up interesting questions about the value of the individual versus the overall good that may come with self-driving cars. Interesting questions for an ethicist.

WENDELL WALLACH: Real interesting questions, and they've gotten posed recently in the form of what are sometimes known as "trolley car problems."

STEPHANIE SY: Explain the trolley car problem.

WENDELL WALLACH: The trolley car problems have been around since 1967, when they were first proposed by the philosopher Philippa Foot, but they have proliferated into hundreds of different variations. Basically, they are problems where you are asked whether to take an action that could save five lives but would cost another life. Traditionally, you throw a switch that redirects a trolley onto a different track, sparing the five, but one person on that track dies.

Suddenly, these are getting applied to self-driving cars and questions are being asked: What should the self-driving car do if it is going to hit a bunch of children, but it could, rather than hit those children, drive off the bridge and kill you, the passenger in the car?

There has been recent research done on this. There were some articles in Science magazine in June where, generally, the public felt that the car should do whatever would kill the fewest people. On the other hand, most of those people also said they would not buy a car that would kill them or the other occupants.

So now, in order to save a few lives when a one-in-many-trillions incident occurs, millions of people do not buy self-driving cars, and then we have thousands of deaths that would not have occurred otherwise. It's one of these interesting situations where the short-term ethical calculation may actually come into conflict with the long-term ethical calculation.
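
To see that tension in rough numbers, here is a small back-of-the-envelope sketch in Python. All of the figures are hypothetical, chosen only for illustration; none of them come from the Science study or any real data.

```python
# Hypothetical, illustrative numbers only; not from the Science study or any real dataset.
preventable_deaths_per_year = 30_000      # road deaths self-driving cars might prevent at full adoption
adoption_if_car_sacrifices_owner = 0.20   # share of buyers who accept a car programmed to sacrifice them
adoption_if_car_protects_owner = 0.60     # share of buyers who accept a self-protective car
trolley_style_lives_saved_per_year = 10   # rare dilemma cases decided against the passenger

lives_saved_sacrifice_rule = (adoption_if_car_sacrifices_owner * preventable_deaths_per_year
                              + trolley_style_lives_saved_per_year)
lives_saved_protect_rule = adoption_if_car_protects_owner * preventable_deaths_per_year

print(f"Sacrifice-the-passenger rule: about {lives_saved_sacrifice_rule:,.0f} lives saved per year")
print(f"Protect-the-passenger rule:   about {lives_saved_protect_rule:,.0f} lives saved per year")
# Under these made-up assumptions, the rule that looks more ethical case by case
# saves far fewer lives overall, because it suppresses adoption.
```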

Now, I have made the proposal that there is no right answer to this problem. It's a totally new situation. You need to have many stakeholders sit down and establish new societal norms. And frankly, I think you don't program the cars to make that decision, even if we could get them to make a decision.

STEPHANIE SY: Even if there was an ethical dial on which you could program the car to reflect your own personal values, you don't think we should have that?

WENDELL WALLACH: No. That's a possible option, and to be honest, I proposed it a few years ago. But it may discourage people from buying a car, or people may be afraid to set the dial at all because they would think, "Even though I'm not driving the car, am I responsible if I have told the car to kill other people rather than to kill me?" It sets up a very interesting ethical quandary for the individual who has to make the choice.

But my concern is that there is, in this case, a utilitarian calculation, meaning the greatest good for the greatest number, and the tension throughout history has been with utilitarian calculations that violate individual rights. So this would be a really tough thing for a society as a whole: deciding that maybe we aren't going to make the individual-rights decision because in the long run it has other ramifications.

STEPHANIE SY: Individual rights and maybe national rights, because let's talk about another big topic when we talk about AI, which is lethal autonomous weapons. Does that question of the value we place on ourselves or our own country versus the greater good, humanity's greater good, also play into a discussion about autonomous weapons?

WENDELL WALLACH: Certainly. So there is this movement to ban lethal autonomous weapons. That basically means any weapon that can select its own targets and kill them, or at least decide who is the target among groups of people that it is looking at.

There are all kinds of moral goods that come from this. The obvious moral good is that it could save some of your soldiers. On the other hand, if it saves some of your own soldiers, it could lower the barriers to entering new wars. All kinds of countries will be more aggressive if they don't feel that their citizenry is going to rise up against them because a lot of soldiers' lives will be lost.

When we invaded Iraq, we had almost no loss of life at all during the invasion itself. But here we are—

STEPHANIE SY: —in an insurgency in which soldiers' lives are still being lost.

WENDELL WALLACH: And it has cascaded into perhaps as many as 300,000 lives lost in Syria alone. So sometimes you can enter into a conflict because you think there is a moral good, or at least you can save your own soldiers' lives. But if you are lowering the barriers to starting new wars; or if you have machines that could unintentionally start a war; or if you start to have flash wars, the way we have flash crashes on Wall Street, because machines are suddenly reacting to each other—my lethal autonomous weapons versus your lethal autonomous weapons—in ways where we don't even know what they are responding to, and in that flash war 1,000 lives are lost, then you have some real concerns.

So a number of us, actually a pretty large community, have begun to support this "ban killer robots" movement. The UN has been meeting for three years already in Geneva with expert meetings, and those will continue over the next few years, to see if it is possible to forge an arms-control agreement that nearly all countries would sign onto to ban this kind of weaponry. It does not mean banning advanced artificial intelligence, but it means banning weaponry in which humans are not in the loop of decision-making in real time, so that a human is there when the crucial decision is made, not just delegating a decision to the machine hours or weeks beforehand.

STEPHANIE SY: Could I posit that there might be a day in the future where we can program machines to be more moral and ethical than human beings?

WENDELL WALLACH: There might be.

STEPHANIE SY: A day where they are more consistent. Certainly, human beings don't always act in the interest of humanity and for the greater good.

WENDELL WALLACH: That's sort of the self-driving car dilemma in one form. I am also known for a book that I co-authored with Colin Allen of Indiana University eight years ago, called Moral Machines: Teaching Robots Right from Wrong. It looked at precisely this question: how can we design machines that are sensitive to human values and implement those values within the decision processes the machines engage in? We were largely looking at it from the perspective of philosophers, with a bit of computer science. But suddenly this has become a real challenge for engineers, largely because of these deep learning breakthroughs and the fears being raised about whether we are now going to make advances in artificial intelligence that will eventually lead to superintelligence.

Because of that, some of the leading responsible members of the AI community, particularly people like Stuart Russell, have instituted a new approach to building artificial intelligence that they call values alignment. Up to now, machines have largely been built to achieve certain goals. The point here is: No, we shouldn't just be building our machines so they will fulfill a goal, because an artificial intelligence may be able to fulfill its goal in a stupid or dangerous manner; we need them to be sensitive to human values in the ways in which they fulfill their goals and the ways in which they take certain actions.
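
As a rough picture of what "values alignment" could mean in code, here is a hypothetical Python sketch (not Stuart Russell's actual formulation or any real system) of an agent that scores candidate actions by goal progress minus penalties for violating stated values:

```python
# Hypothetical sketch of "values alignment": pick the action that best serves the goal
# after subtracting penalties for violating stated values. Illustrative only.
from typing import Callable, Dict, List

def choose_action(actions: List[str],
                  goal_score: Callable[[str], float],
                  value_penalties: Dict[str, Callable[[str], float]]) -> str:
    def aligned_score(action: str) -> float:
        return goal_score(action) - sum(penalty(action) for penalty in value_penalties.values())
    return max(actions, key=aligned_score)

# A cleaning robot: dumping debris in the hallway finishes the goal fastest,
# but a safety value penalizes it, so the aligned choice is the slower one.
actions = ["dump debris in hallway", "carry debris to the bin"]
goal_score = lambda a: 10.0 if a == "dump debris in hallway" else 7.0
value_penalties = {
    "do not create hazards for people": lambda a: 8.0 if a == "dump debris in hallway" else 0.0,
}
print(choose_action(actions, goal_score, value_penalties))  # -> "carry debris to the bin"
```

In practice, of course, the hard part is specifying those values and penalties at all, which is exactly the engineering question described next.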

Their concern is: how do we implement this within engineering practices within the very forms of artificial intelligence that will be built—not only that they implement values, but how do we ensure that they are controllable, that they are safe, that they will be sensitive to the concerns we have?

STEPHANIE SY: Is there a way—and maybe this is beyond your purview—but is there a way to make sure that we can continue to control the machines versus them controlling us—I mean if they reach a level of superintelligence, or singularity as someone called it?

WENDELL WALLACH: Right. It was called the singularity for many years, and now the term superintelligence has superseded it, only because the singularity got a little bit confusing.

No one knows. It's not an immediate concern, in spite of the fact that the newspapers put a picture of Arnold Schwarzenegger as "The Terminator" on every conceivable article where anybody even talks about artificial intelligence. I'm just amazed at how many pages I've shared with him over the years.

STEPHANIE SY: You and the Arnold.

WENDELL WALLACH: This is all a very distant prospect, and many of us even question whether it is achievable. But to laud the engineers, they are saying, "Yes, but by the time we know whether or not it's achievable, it may be too late to address whether it is manageable. So let's start dealing with the control problem, with the values alignment problem, now."

I applaud this because it also will be applicable to the near-term challenges. People like me, who have been talking about near-term challenges with emerging technologies and artificial intelligence in general, have suddenly been getting a lot of attention over the last year or two. It's largely because people like Stephen Hawking and Elon Musk and Bill Gates are raising longer-term scare issues around superintelligence. So it's a two-edged sword.

STEPHANIE SY: But it's also because of the actual leaps in the technology and the fact that we do have self-driving cars, we do have AlphaGo. Elon Musk, who has invested in DeepMind, is one of the signatories to that letter that basically said, "Let's be careful of the pitfalls of AI," which is encouraging.

WENDELL WALLACH: Not only that. He put his money where his mouth is. He gave $10 million to the Future of Life Institute for projects that would work on safe AI and robots—beneficial AI is kind of the more generous way of putting it—and people like me received grants to move forward research in those areas. That speaks to me.

STEPHANIE SY: That's encouraging.

What about government's role? For now it seems like Google has its own ethics board, I understand, looking into these issues—I don't know if you've worked with them. But does government regulation have a role here?

WENDELL WALLACH: It does, yes. The industry is just beginning to deal with this problem, and I think you'll hear more about that over the next year or two.

But governments are just beginning to look at these concerns. The White House has held some meetings over the past year where they have been looking at issues around AI and whether there are initiatives that should be taken. Many of the international standard-setting boards are looking at whether there are international standards that should be set either in electrical engineering or in other kinds of fields. But this is all very new.

On the one hand, it has opened the door for overly zealous ethicists, like myself, to maybe call for things that aren't fully needed—

STEPHANIE SY: Yet—but maybe soon.

WENDELL WALLACH: —but may be needed, and we need to give attention to it.

On the other hand, there is always a concern that regulation will stifle innovation, that you don't want to put in place bureaucracies based on laws that look good today but that the technology will make obsolete tomorrow.

So I personally have called for the development of a global infrastructure to begin thinking now about how to ensure that AI and robotics will be truly beneficial. That infrastructure isn't just hard law and hard governance—if anything, we should limit those—but looks more at soft governance: industry standards, professional codes of conduct, insurance policies, all kinds of other mechanisms. But it also looks at how we can engineer AI more safely—what the engineering challenges are, and how we can make values, such as responsible agency or concern for privacy, part of the design process that engineers think through when they implement new technology.

So there is that: beginning to think through what international bodies can really do, what the engineers can do, what industry can do, and what can be handled by more adaptive, evolutionary mechanisms than the ones we tend to rely upon today in terms of hard law and regulatory agencies. That is kind of the first response. But perhaps we need to think through a new, more parsimonious approach to the governance of emerging technology, where we can certainly penalize wrongdoers, but we don't expect to come up with regulations and laws for every conceivable thing that can go wrong.

STEPHANIE SY: And part of that is that there is a potential net benefit to humanity with a lot of the technologies we're talking about. We've been focused on the potential negative effects of AI in this conversation, but there are these net positives in robust AI research that could really benefit us.

WENDELL WALLACH: Tremendous benefits in health care. We've talked about driverless cars; I think the overall moral good is clear with driverless cars. But that doesn't mean there aren't some problematic aspects, some things that need to be regulated, some decisions that governments will need to make.

All of these technologies are being developed because somebody believes that there is some tremendous worldwide good that can derive from their research. The difficulty is, can we maximize those benefits while minimizing the downsides and minimizing the risks? That is what requires much more attention.

STEPHANIE SY: I know you seem skeptical that we are anywhere near the sentience of robots or making them alive or superintelligent. But as I was researching ethics and AI, I came upon conversations about personhood and robot rights, and whether we need to start having an ethical discussion about how we may treat robots that are increasingly sharing characteristics with human beings. Is that crazy to talk about at this point in the evolution of AI?

WENDELL WALLACH: There are two sides to this discussion. One side is that this has been a way in which, in law and ethics, we have talked about when entities are worthy of rights, when they are worthy of responsibilities, and when they might truly be considered a moral agent that can make an important decision. The conversation has been around for a long time. It has been dealt with in terms of children, but we are also now talking about it in terms of certain animals, particularly the great apes. So it has become a broadening of the legal-philosophical discussion about rights.

But yes, some people truly believe that these entities will have their own capabilities and should be given rights, though I think sometimes these people worry more about whether we are creating a class of slaves that may never exist than about whether we might allow humans to be enslaved by other processes that we put in place, where human concerns get overridden. So I'm a little mixed on the meaning of that future discussion when it's truly about artificial intelligence.

Now, yes, I have my reservations about whether these entities will be deserving of rights. But I also hope that I have the sensitivity to recognize that if they do have sentience, we should not turn them into a new class of slaves. So my reservation is more about whether we can truly implement empathy, a giving nature, an appreciation for others and their needs, into machines that are essentially computational beings.

STEPHANIE SY: Do you think it's ethical to program sentience into a machine?

WENDELL WALLACH: Well, I think we are going to have to deal with that challenge. If sentience means the ability to feel pain—you know, it's not as if we are going to jump straight from here to some remarkable sentient being. There are going to be stages. Are we actually programming the ability to feel pain into an entity that is feeling pain and is in agony during the processes we are putting it through? We might actually have to consider whether these entities we are building should have the kinds of rights we give research subjects. It could turn out to be very unethical.

The real problem with the superintelligence discussion is it is always as if we are going to jump from here to suddenly having superintelligence without thinking through the stages of development between here and there. Those stages of development are going to give us all kinds of inflection points where we may decide "this is a good idea or this is a bad idea."

Take today. We're putting self-driving cars on the road, largely as experiments. This is going to take many decades. But what happens if one of the Google self-driving cars, or one of the cars that can self-drive on a highway, kills another driver or somebody hitchhiking on that highway? You are engaged in a research experiment, and the individual who was killed never gave you the right to experiment with their life. You may argue that the person who owned the car, in downloading that software, made an informed decision, gave informed consent. But the rest of us haven't.

So that's a real danger. If we have a death due to a self-driving car, that gets a lot of attention. We really don't know when it will occur, how it will occur, how dramatic it will be, and how strongly the public will react to it. It's possible we could squelch a technology that could have tremendous benefits because it had so turned off the public.

Throughout the development of artificial intelligence we are going to get inflection point after inflection point where we can decide "this is a road we want to go down; this is a road we don't want to go down."

I would like to see us go down the road of fewer inattentive humans on the road. I wouldn't like to see us go down the road of lethal autonomous weapons. I think the prospect of different countries' robotic systems going at each other in warfare could take warfare totally out of human control.

STEPHANIE SY: A lot of people might have said that, though, about the atom bomb, and yet it was still deployed and it was still used, and it's hard to put the genie back in the bottle.

WENDELL WALLACH: It is very hard to put it back in the bottle. And we have no way of ever really judging that. If you look at the last 70 years, where we have not had atomic warfare, and arguably there has been less warfare in the world, some people argue that the atomic bomb has helped contribute to a decrease in violence.

STEPHANIE SY: To a stabilization through mutually assured destruction and all of that.

WENDELL WALLACH: But if atomic warfare occurs once in the future, that could obliterate any of the advantages we have seen over the past decades. So you are always put in a very difficult situation of trying to make that kind of judgment.

But my concern with lethal autonomous weapons is not simply the judgments they're making, but I'm concerned that artificial entities, autonomous entities in general, could dilute this foundational principle that there is a human agent, either individual or corporate, that is responsible and potentially culpable and liable for the harms that are caused by the deployment of these machines. I do not think that humanity wants to go down the road of diluting this fundamental principle that there is a responsible entity, a responsible being.

So, in a sense, lethal autonomous weapons are just the tip of a much broader iceberg. The broader issue—I have often used this metaphor of the self-driving car—is: Is technology moving into the driver's seat as the primary determinant of humanity's destiny, and do we want that?

STEPHANIE SY: And can we prevent that?

WENDELL WALLACH: And can we prevent that.

I think it's not inevitable. I think we can shape it. But I'm not a Luddite—I love a lot of these technologies, I'm fascinated by them, I want to see them implemented, I think a lot of good is going to be done. So again, it's this challenge of: can we maximize the benefits of these technologies and minimize their risks?

STEPHANIE SY: Absolutely fascinating.

Wendell Wallach, thank you so much for joining us.

WENDELL WALLACH: Thank you ever so much. This has been great fun.
