L to R: David Roscoe, Bart Selman, Francesca Rossi, Stuart Russell, Wendell Wallach. CREDIT: Amanda Ghanooni

Control and Responsible Innovation of Artificial Intelligence

Dec 7, 2018

Artificial Intelligence's potential for doing good and creating benefits is almost boundless, but equally there is a potential for doing great harm. This panel discusses the findings of a comprehensive three-year project at The Hastings Center, which encompassed safety procedures, engineering approaches, and legal and ethical oversight.

This event was in partnership with The Hastings Center.

JOEL ROSENTHAL: Good morning, and welcome to you all. I'm Joel Rosenthal, president of Carnegie Council, and I'm delighted to kick off this program this morning with our friends and associates and colleagues from The Hastings Center.

Years ago, when I first came to Carnegie Council and came to understand that the role of this organization was to try to illuminate issues of ethics in public policy and ethics in public life, I ran across The Hastings Center, and it has always been a model to me and to us for how to do this kind of work.

When we had the opportunity to host this program on a topic that's of some interest to us, artificial intelligence (AI) and what it will mean for human society, for governance questions, for our country, for the world, and for some of us up here who are working on these problems, I understood from our friend Wendell Wallach that there was an important project going on at The Hastings Center, and this morning we're going to hear a summary of the findings from that project.

I'm going to turn it over to David Roscoe, who represents The Hastings Center and their board of trustees, and he's going to moderate. So, David. Thank you.

DAVID ROSCOE: Thank you very much, Joel. As Joel said, as the chair of The Hastings Center Advisory Council, I'm very privileged and honored to welcome everybody here and thank Joel and his team for hosting us in this great venue, and also to welcome the many people who may be on the live webcast. I was talking with Joel earlier, and he's not quite sure how many people that is, but my hunch is that it's quite a few, given the topic, which is "Control and Responsible Innovation of Artificial Intelligence."

As Joel said, Hastings has been around for 50 years. We're a non-partisan, independent, non-profit institute located in Garrison, New York, just across from the West Point Military Academy. Our mission for the 50 years since we were founded has been to explore ethical dilemmas associated with advances in medicine, technology, and science. We do two things: first, we conduct and promulgate excellent research on ethical questions, rooted deliberately in hard science and hard facts; and second, we work very hard to make sure that the fruits of our research don't sit on bookshelves, that what we do makes it out into the real world and has impact.

Recently, in the realm of emerging technologies we've been working very hard on the life sciences, particularly in areas like genetic engineering and genomics research, and given the events of the last week, I'm sure you can imagine that our phone has been ringing off the hook. We feel that this particular set of sciences has brought us to a profound point in human history.

But we're also mindful that artificial intelligence, a companion technology, carries equally profound implications for society. Google's CEO said at Davos that the discovery by humans of artificial intelligence is "more profound than fire and electricity." I'm not sure; that's a little bit of an overstatement, but it gets your attention.

We agree with it in general, so we've been thinking about this project, which Wendell is going to explain in a lot more detail. AI is obviously rapidly transforming society and the ethical landscape within it. There are profound implications, challenges, and trade-offs arriving on society's doorstep faster than our ability to process them. AI's potential for doing good and creating benefit is almost boundless, but equally there is a potential for doing great harm.

Innovation is impossible to stop. We accept that. So we are not looking for stop signs, we're looking for guardrails, and that's essentially what this project was about, and those were the questions that animated our project, and I'm going to turn it over to Wendell, who's going to give you some of the highlights.

WENDELL WALLACH: Thank you very much. I really want to thank Joel and Melissa for all the work they've put together in making this happen. They have been truly good friends to me personally, and I'm thrilled that we're able to do this event together with the Carnegie Council for Ethics in International Affairs.

The genesis of this project was at an event in Puerto Rico in 2015 that was sponsored by the Future of Life Institute. They were particularly concerned with ethical issues arising in AI, particularly the existential risk question and the future of work, the question of job displacement. So they brought together all of the key leaders in the AI community, and the four of us on this panel were all there. I was one of the, what shall I say, few people who were not really in the AI community but had long been considering the ethical, legal, and policy questions around AI and robotics.

One of the things that happened at that conference is that Elon Musk showed up, and he gave the Future of Life Institute a grant of $10 million to start funding projects on AI safety. They were particularly looking at research-based projects, but they were open to other concerns. Peter Asaro, who is also here, got one of those grants, and this event reports on the grant that The Hastings Center received.

The inspiration for that grant was my sense that the AI people really did not know the people outside the AI community who had been considering ethical concerns around AI and robotics for many years. There were people in machine ethics, in engineering ethics, in the forensics of major accidents, test pilots, and people looking at coordination between human operators and high-tech computational systems. There was a vast array of fields that were either not represented at the Future of Life meeting or did not really have a voice at that meeting.

So the thought, which began with a conversation between Stuart and myself, was that we would put together a project where we would bring all those people together in one room. I think from the get-go this was first and foremost a silo-busting project: the leaders of these different fields would hear from each other, come to understand what each other was working on, and hopefully this would open into a highway of collaboration.

The first task, that first goal, has clearly been achieved. It's not only that we had our workshops over three years, but now there is really a robust international community of transdisciplinary events where social scientists and policy people and technical people—engineers and researchers developing the technologies—come together. Francesca is going to say a little bit about that I think in her comments.

I just want to mention the other three themes that seemed to anchor our conversations. One theme was that we were explicitly bringing together people who were talking about a value-alignment approach and people who had been talking about a machine ethics approach to developing computational systems that are sensitive to human values and factor those values into their actions. Those aren't exactly the same fields with the same goals, but there is significant overlap. I believe Stuart is going to tell us a little bit more about the value-alignment approach, but for many years I had been involved with a community of scholars looking explicitly at the problem of how we can implement sensitivity to ethical considerations in computers and robots and help them factor those considerations into their choices and actions so that in the end they would take appropriate actions.

The machine ethics field was more philosophical than technical, but there were computer scientists doing projects. It looked at ethical theories and learning approaches for developing these capacities, and it focused very heavily on near-term ethical challenges: how you could get any sensitivity at all into computational systems. But it also expanded into the greater challenges of whether you could really implement all of the human faculties, such as emotions, consciousness, and others, that seem to be important in humans, and whether you could eventually create artificial agents that were artificial moral agents, with the capability for rights and responsibilities equivalent to humans. Some people were very caught up in those futuristic concerns, but most of us were more grounded in the near-term concerns.

But the tension between the futuristic and the near term really came to the fore with this emphasis on the existential risks that artificial general intelligence would create, and that led to the third of our core themes: the conversation about whether to focus on long-term or short-term ethical concerns for the systems themselves, and whether each focus helps the other. Initially, the question was really whether the focus on short-term concerns would help us build toward the long-term concerns, or whether the long-term concerns were of a totally different order. But I think Stuart feels that some of the research on the longer-term considerations may actually help with the shorter-term ones, so perhaps he will say a little bit about that, or at least we'll get around to it in the short time we have together.

The fourth theme I'm going to talk about much more after the others have given their presentations, and that's the governance of artificial intelligence and robots and the emergence of a project toward an International Congress for the Governance of Artificial Intelligence.

But rather than say more now, let me pass this on to our other panelists.

FRANCESCA ROSSI: As Wendell said, this whole community of people, both AI experts and experts in other disciplines, interested in understanding the impact of AI on society started some years earlier, but it really came together, I think, in Puerto Rico in 2015. That was the start of this plethora of initiatives and events, and now there is almost one every—

PARTICIPANT: Week.

FRANCESCA ROSSI: —week, or maybe two every day, like today. And the fact that there are so many also points to the need for governance and coordination.

But really these two or three years leading up to now were very important for the AI experts to realize that they needed to understand better the impact of these technologies on the real world, on other people, and on society, and that they couldn't do this by themselves. They needed these experts from other disciplines. They needed the social scientists and the other people who have the skills, the knowledge, and the capabilities to understand how a technology can affect everybody's life.

As Wendell said, there are many concerns; he mentioned short-term and long-term ones. Just to give an example of the main short-term concerns: first of all, why do people have these concerns?

Most people who are not experts in AI read about it, and the way AI is usually depicted in the media is as a kind of mysterious technology that can make decisions on its own and could wake up one morning and decide to do strange and dangerous things to us. In some sense it is like that, but of course not exactly in those terms, and not with consciousness.

But there are actual concerns, even for people who know the technology, the methodologies, and the techniques. For example, you may know that the technology right now is very heavily based on making sense of large amounts of data. That can be very helpful, because our brains are not really able to deal with huge amounts of data, understand what they mean, find correlations, and so on. But on the other hand it can also bring a lot of hidden bias into the technology, because we don't really understand very well what information the system is picking up and what is hidden in that data; that's the whole point of using machines instead of our brains. So that is one of the short-term concerns: Are we sure that the decisions made by these technologies are fair and are not favoring one group over another?
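To make the bias concern concrete, here is a minimal sketch of one simple check that is often used: comparing a model's positive-decision rate across groups. The data and the 0.2 threshold are hypothetical, purely illustrative; they are not from the panel or from any particular system.

```python
# Illustrative sketch: does a model's positive-decision rate differ across groups?
# (hypothetical data and an arbitrary threshold, for illustration only)
decisions = [
    # (group, model_said_yes)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", True), ("B", False), ("B", False),
]

def positive_rate(group):
    outcomes = [yes for g, yes in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = positive_rate("A"), positive_rate("B")
print(f"group A: {rate_a:.2f}, group B: {rate_b:.2f}")

# A large gap in decision rates ("demographic parity") is a signal to inspect
# the data and the model, not proof of unfairness by itself.
if abs(rate_a - rate_b) > 0.2:  # 0.2 is an arbitrary illustrative threshold
    print("warning: decision rates differ substantially across groups")
```

Checks like this are only a starting point; which fairness criterion is appropriate depends on the application, and that question itself needs input from beyond the AI community.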

Another thing is that these techniques are also kind of opaque; they are a kind of black box. So you have to trust this technology, otherwise you're not going to follow whatever recommendations or decisions it makes. I think a black box is always something that raises concerns for people. How do I know what has been put into this black box? Who decided to put the decision-making capabilities into the black box?

So that's another big concern: we want the technology to be explainable and the design choices to be transparent. The whole point is to build trust in the technology; not trust per se, but justified trust, of course. So we want the technology to be trustworthy, but at the same time we also want those who produce the technology to be trustworthy. They have to be clear about how they deal with the huge amounts of data they collect and about all the design choices they make.

In some parts of the world there are already regulations about what the technology can do with data, like the General Data Protection Regulation in Europe, which gives certain rights to the people who are subject to the use of these technologies, but that's just one part of the world; in general that's not true. The whole point, in terms of short-term concerns, is to build this trust.

Then the long-term concerns, which Stuart may elaborate on, are of course also important. When a technology is so intelligent, whatever that means, and so capable, so full of skills and resources, that it can make decisions that are very impactful and influential, possibly in good or bad ways, how are we going to be able to control it? Are we going to be able to understand how to harness it, and so on?

In these three years the AI community, which had been focused on advancing the capabilities of the technology, really started thinking: Okay, these are the capabilities. I'm trying to advance them, but at the same time I want to listen to these other people who are telling me, "Be careful, because there could be some issues when you deploy this technology."

I have to say that at the beginning the AI community was kind of saying: "Yeah, well, okay, but somebody else will think about this. We just think about doing our own theorems and proofs and experiments and advancing the technology."

But I think in these three years the whole thing came together. And that is very important because the AI people can find technical solutions to a problem, but we need everybody else to even raise or identify the issues. The issue of possible bias in the technology was not raised from within the AI community; it was raised by other people. But the AI community was receptive enough, and now there are entire conferences devoted to fairness and similar issues in AI.

The Hastings Center's three workshops and the report that Wendell put together, which I think you can read and look at, were one of the instrumental paths to get us to where we are now. As Wendell said, it was one of the 37 projects funded out of Elon Musk's donation to the Future of Life Institute.

The Future of Life Institute has been very instrumental not just in putting together this call for projects and the Puerto Rico event in 2015; every two years it holds a conference, so two years later there was also a conference in Asilomar, where we put together a set of general principles that technologies like AI should follow, and in one month there will be another conference in Puerto Rico that is going to focus perhaps more on the long-term concerns.

But I really enjoyed, for example, the very multidisciplinary environment of the workshops at The Hastings Center. I remember we devoted almost a full day to understanding the impact of the possible next generation of personal digital assistants (PDAs), like the next generation of Siri: what that would mean for people, and how it would affect their lives more than current PDAs do.

Besides these workshops and this project and the other 36 projects, of course, there are many other initiatives that really leverage this multidisciplinary and also multi-stakeholder approach, because again you don't want to listen just to AI experts, or just to AI experts plus social science experts; you also want the people who are not really scientists but are going to be affected by the technology. So you really want all the stakeholders.

For example, another initiative that I think is very unique is the Partnership on AI. It was put together two years ago by the six main tech companies, you would say: IBM, Microsoft, Apple, Facebook, Google/DeepMind, and Amazon. Now it has about 80-plus partners, and less than half are companies. Everybody else is non-governmental organizations (NGOs), civil society, academic institutions, research centers, and so on, because you really want to hear everybody in understanding how to drive the technology to be really beneficial for everybody.

Another dimension besides multidisciplinarity and multi-stakeholder engagement is multiculturalism, because this technology is going to be very global, but values and regulations are going to be very different in the various parts of the world, and you need to compare, and you need to be able to collaborate and engage with everybody. China is already very influential in AI. You need to bring them into the conversation as well, even if on some choices other parts of the world might not follow the same path; you really need to engage with all of them.

To finish, and to go back to the road analogy that David brought up, a few weeks ago a friend of mine, the philosopher Patrick Lin, gave me a very nice analogy for the role of AI ethics work in the technology. Some people feel that thinking about these issues is going to slow down the advancement of the technology, so they are reluctant to say, "Okay, let's inject bias detection and mitigation," because "Oh, no, no, I want to deploy the technology as fast as I can." But in fact he gave me this very nice analogy: ethics is like traffic laws. He said that if we didn't have traffic laws, we would drive much slower, because we wouldn't know where cars are coming from, at what speed, or what they can do. There would be complete freedom, but we would be so afraid that we would drive much, much slower.

So the fact that we have traffic laws allows us to actually drive much faster, and so that's why I think that really AI ethics is not a constraint but is an enabler for innovation.

DAVID ROSCOE: Terrific. Stuart.

STUART RUSSELL: Thank you, David. I like to think a little bit about history and some of the other major issues that our society has grappled with, such as nuclear weapons and climate change. When I think about how well we've done with those major challenges, I think, If only we could emulate that with artificial intelligence, we'd be in really good shape.

Actually, I think this effort that's underway now in the broad AI community is anticipating in some sense the major impacts that AI is going to have and is trying to make sure that those impacts go in the right direction. At the same time, it's also coming very late in that AI systems have already had major impacts that were completely unintended.

For example, there is what we might call the "click-through catastrophe," where very simple machine learning algorithms that are supposed to feed your social media stream with articles that are more interesting and therefore more likely to be clicked on had the effect of modifying people's behavior and preferences in completely unintended ways, leading to, I think, major changes in democracy and social structure. As my friend Max Tegmark likes to describe it: "That was done by a bunch of dudes chugging Red Bull with absolutely no thought for the consequences whatsoever. None." Can we do any better?

The first thing to think about is, What are we talking about? If we just think of AI as this black box, essentially a technology that landed from another planet on the Earth and is much smarter than us and somehow we need to figure out how to control it, that problem is unsolvable.

If a superintelligent system does land on Earth from another planet, we are toast, right? We are simply going to lose because that's what "superintelligent" means. Just as we lose if we try to play go against AlphaGo or AlphaZero, or chess against Deep Blue or Stockfish, if you try to defeat a superintelligence in the real world, you lose. That's the definition.

We have to actually open up this black box and say: "What's inside? How does it work, and how can we design it so that it necessarily is beneficial to human beings?"

This question is actually an old one. Alan Turing, the founder of computer science, in 1950 wrote a very famous paper called "Computing Machinery and Intelligence," which was really, I think, the beginning of the discipline of AI. In 1951 he spoke on BBC radio, and he basically said: "Look, eventually the machines are going to overtake us, and we're going to lose control." He had no remedy for this. It was very matter-of-fact. There was no panic, just, "This is what's going to happen." That prediction comes simply from the fact that machines will eventually exceed our decision-making capabilities in the real world.

There's not a lot we can do about that. As I think David mentioned, there's no stop sign here, and you can see why. If you work out the economic impact, just a little back-of-the-envelope calculation: What would the economic value of human-level AI be if we achieved it?

Well, if we had human-level AI, we could, for example, very quickly raise the living standards of everyone on Earth up to a kind of, let's say, Upper East Side living standard. When you calculate the net present value of that transition it's about fifteen thousand trillion dollars. That's a conservative estimate of the size of the prize that companies like Google are aiming for and countries like China are aiming for. If you think us sitting at this table and the people in this room can put a stop sign in front of fifteen thousand trillion dollars in motivation, I think that's probably optimistic.
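One way to reconstruct that back-of-the-envelope figure (the specific inputs below are illustrative assumptions, not numbers given in the talk): suppose human-level AI raised consumption by roughly $100,000 per person per year for about 7.5 billion people, and treat that gain as a perpetuity discounted at 5 percent:

\[
\Delta \approx 7.5\times10^{9} \times \$100{,}000 \approx \$750~\text{trillion per year},
\qquad
\text{NPV} \approx \frac{\Delta}{r} = \frac{\$750~\text{trillion}}{0.05} \approx \$15{,}000~\text{trillion}.
\]

Different assumptions move the number around, but almost any plausible set of inputs lands in the thousands of trillions, which is the point of the estimate.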

So we have to understand what the failure mode is. Let's face it: right now, just the announced investment plans of the major nations around the world amount to about a trillion dollars over the next decade in AI research and development. If we're investing a trillion dollars in something that's going to lead to our own destruction, something is seriously wrong. So we need to go back and ask: What is this thing that we're making, and how precisely could it cause the serious problems that people are afraid of?

In 1960 Norbert Wiener had just watched Arthur Samuel's checker-playing program learn to play checkers considerably better than its own creator. That was a big event. It was televised. That was really in some sense the exact progenitor of AlphaGo, which had a huge effect when it defeated Lee Sedol a couple of years ago.

What Wiener saw is that this technology was going to continue to accelerate and lead to exactly the kinds of problems that Turing talked about. The reason is that machines pursue objectives, and if those objectives are the wrong ones, then humanity will lose control; it will then be like playing a chess match for the world. The machine is pursuing some objective that we have put into it, and if we put in the wrong objective, then it will succeed in its objective and we won't.

So he likened this to "The Sorcerer's Apprentice" story, but you could also liken it to the King Midas story or Aladdin and the lamp; whenever you ask the genie for a wish, your third wish is always, "Please undo the first two wishes," because it's very hard to specify objectives correctly. We've only been shielded from that, as Wiener pointed out, by the fact that machines up to now have been pretty stupid, and if we put in the wrong objective, as we often do, we can usually reset them because we're basically working in the lab.

But as we have seen with the click-through catastrophe, AI systems are now out there in the world having an impact on a global scale, and we haven't yet been able to press the reset button on social media. It's still happening. What can we do about that?

The answer is actually to go back to the definition of artificial intelligence, which has been since the beginning that we build machines whose actions can be expected to achieve their objectives. This was borrowed directly from the notion of human intelligence. A human is intelligent if their actions can be expected to achieve their objectives. This is what we mean—the more technical version is rationality, and that principle is not just for AI, but it's also central in economics, in statistics, and in control theory and operations research.

In all these disciplines, basically what you do is create mathematical, computational, or physical machinery, then specify an objective exogenously, put it into the machine, and it optimizes that objective on your behalf. That's how those disciplines work, and that's one of the main underpinnings of 20th-century technology.

But that way of doing things is a mistake because when machines are sufficiently capable—as I've already pointed out, if you put the wrong objective in, you can't control the result. For example, you might say, "It would be great if we could cure cancer as quickly as possible." Who here could possibly object to that? It sounds very reasonable to me. But when a machine is optimizing that objective, probably it would induce tumors in the entire world population so that it could run as many medical trials as possible in parallel in order to find a cure as quickly as possible. It's no good saying: "Oh, I'm sorry. I forgot to mention this other thing, like you're not allowed to experiment on humans, and you're not allowed to do this, and you can't do that, and you can't spend the whole world GDP on this project, and blah, blah, blah."

Whenever you give an objective to a human, they already have a whole array of concerns and understanding of other people's concerns and constraints and so on against which they interpret this new objective that they've been given, and they might even just say: "You know, that's a ridiculous objective. I'm not going to work on that problem."

When we think about how to design machines, I think we actually need to have a different view of what AI is or what AI should be, which is machines whose actions can be expected to achieve our objectives, not their objectives, so not objectives that we put into them but objectives that remain within us.

We are what matters. If cockroaches were designing AI, they could design AI to satisfy cockroach objectives. That's fine. But as long as we're doing it, it makes sense for us to have machines satisfy our objectives. Within that definition, though, it becomes a more complicated problem, because now the objective is what we call a "hidden variable." It's not directly observed and known by the machine.

So you've got the machine which is pursuing an objective, namely, the satisfaction of human preferences whose definition it doesn't know, and that's a more complicated problem, but I would argue—and this is one of the things that came from our discussions at these meetings—this is the right way of thinking about what artificial intelligence is.

When you have machines that are coupled to humans in this way, it turns out that they behave very, very differently from machines that have fixed objectives. A machine that has a fixed objective will pursue it essentially at all costs. You could be jumping up and down and saying: "No, no, no! You're going to destroy the world!" But the machine knows the objective and knows that its actions are correct, because they are the ones that maximize the expected value according to that objective definition.

If the machine doesn't know what the objective is, and you say, "No, no, no, don't do that," now the machine has learned something. It has learned that whatever action it was taking has some negative effect on human objectives and therefore should be avoided. In fact, the machine is quite happy to be switched off, because that presumably is done by the human to prevent some harm, and it is precisely the harm that the machine wants to avoid but doesn't know that it's about to do, and therefore it welcomes being switched off.

So you can prove mathematically that machines defined in this way actually are beneficial to human beings. That proof fails when the uncertainty about the objective goes away. So it's explicitly tied to uncertainty about the human objective, and that's the core of how we achieve safe and beneficial AI systems in the future.
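A minimal numerical sketch of that argument may help (this is a toy illustration under my own assumptions, not the formal result referred to in the talk): a robot uncertain about the human payoff U of its proposed action prefers to propose and defer, because correction by the human only ever removes negative outcomes; when the uncertainty disappears, so does that preference.

```python
import random

# Toy "off-switch" illustration (assumptions are illustrative, not from the talk):
# the robot is uncertain about the human's payoff U for its proposed action.
#   act immediately        -> payoff U
#   propose and defer      -> human allows if U > 0, else switches the robot off
#                             -> payoff max(U, 0)
#   disable the off switch -> payoff U (no better than acting immediately)

def expected_values(samples):
    act = sum(samples) / len(samples)                        # E[U]
    defer = sum(max(u, 0) for u in samples) / len(samples)   # E[max(U, 0)]
    return act, defer

random.seed(0)

# Uncertain belief: U is spread around a slightly positive mean, so there is a
# real chance the proposed action is harmful. Deferring scores higher.
uncertain = [random.gauss(0.1, 1.0) for _ in range(100_000)]
act, defer = expected_values(uncertain)
print(f"uncertain belief: act={act:.3f}  defer={defer:.3f}")

# Certain belief: the robot is sure U = 0.1. Deferring adds nothing, and the
# incentive to leave the off switch in human hands disappears.
certain = [0.1] * 100_000
act, defer = expected_values(certain)
print(f"certain belief:   act={act:.3f}  defer={defer:.3f}")
```

In the uncertain case the expected value of deferring exceeds that of acting (and of disabling the switch), which is the sense in which such a machine welcomes being switched off.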

So that was the basis of a new center that we set up at Berkeley, again partly as a result of the workshop, because some people at the workshop were connected with foundations that very kindly supported the Center for Human-Compatible Artificial Intelligence at Berkeley, meaning artificial intelligence that's compatible with human existence. Since then, we've been doing research on how this approach can actually be turned into real AI systems.

As soon as you start thinking about that problem, it starts to get more complicated. The first thing you notice is that there's actually more than one human being, which is kind of a nuisance. It has some advantages, but it is a nuisance. Immediately you realize, Oh, well, in some sense the problem the AI system faces of satisfying the preferences of multiple human beings is similar to problems addressed in political science and philosophy for millennia. I think I've traced it back to the fifth century BC and a gentleman by the name of Mozi in China, who was maybe the first to propose the utilitarian formula that we add up the preferences of everyone and treat them all equally.

Then, I was giving a talk in Westminster Abbey a couple of weeks ago, explaining some of these ideas, and afterward the canon of Westminster Abbey, a lovely lady, came up to me and said, "You know, this utilitarianism is complete crap," because there's a whole other viewpoint in ethics and philosophy that we have to think about rights and virtues, and that just adding up preferences gives you all kinds of counterintuitive and some might argue immoral recommendations and so on.

The discussions in the workshops often involved not quite such colorful language but ethicists and philosophers saying to the AI people: "You know, you should read this. You should read that. There's lots more to this question than you might imagine." I would say the vast majority of AI researchers, if they ever thought about it at all, would just say: "Yes, add it up. Add up all the preferences. That seems perfectly straightforward."

But in fact there are lots and lots of complications. One of my favorites is the "utility monster," the person whose utility scale is vastly stretched out compared to the utility scale of ordinary people. The utility monster is overjoyed by a cup of tea and absolutely plunged into despair by a few sprinkles of rain, and that person ends up sucking up all the resources of the world if you follow a strict utilitarian formula. You might say, "Well, that's very rare," but I have children like that, so it's not very rare.

But also, when you're faced with a robot that will grant all your wishes, everyone has an incentive to behave as if they are a utility monster, even if they aren't. So you get strategic interactions among the humans competing for the attention of the robot, and how do you get around that problem?

So, lots of interesting questions. Also, humans, as we know, are far from rational, so we're trying to understand the preferences of humans, which are displayed through behavior but in a very imperfect way. Our behavior is ultimately, one might argue, generated from our underlying preferences but not in a rational way. We are very short-sighted, emotional, and computationally limited.

When a machine sees Lee Sedol play a losing move in a game of go, it should not assume that Lee Sedol wants to lose. It should assume, no, he wants to win, but his moves are being produced by a computationally limited decision-making algorithm. So understanding the nature of human computational limitations, the nature of human decision making, even the nature of the human emotional system, all are part of this task of having machines understand human preferences so that they can satisfy them.

These are some of the things that we're now working on at the Center, CHAI as we call it. We're also thinking about long-term impacts on employment. We're thinking about killer robots, which my friend Peter Asaro here is also very interested in. And we're trying to engage in this public debate which is going on in the media.

We have Mark Zuckerberg saying we don't need to worry; we have Elon Musk saying, "You have no idea what you're talking about, Mark Zuckerberg," and so on and so on. I would not say it's the highest-quality debate, but we're hoping that we can actually make some gains in public understanding in the process.

Thank you.

DAVID ROSCOE: Terrific. Next up is Bart. Bart is the president-elect of the field's largest association, the Association for the Advancement of Artificial Intelligence (AAAI), and in that capacity has some very interesting remarks for us.

BART SELMAN: Thank you. It's great to be here. Let me start off by saying that for artificial intelligence research this is a very exciting time, even though to outsiders it sometimes might look like a somewhat worrisome development, with smarter AI systems at our doorstep.

I just want to say that the AI community has traditionally been a very academic community. Basically, our work did not have direct real-world impact, so we actually have not been thinking much about the issues of what the consequences of these smart systems could be.

However, in the last few years—and the Hastings workshop was a big factor here—we've seen a dramatic shift in our field. AI researchers are now very aware of these issues. We are very aware of issues of fairness, transparency, trustworthiness of AI systems, and we are exploring many different research projects among the various research groups in the nation but actually worldwide. I think it's a positive development, this attention to the field, and I'm sort of optimistic that we'll find good ways of dealing with these issues.

I think one distinction to make first is that I don't think we're faced with one superintelligence that will soon arrive. That's not the way I think things will move forward. Instead, what we'll see is specialized systems in various domains, like medical domains, engineering domains, self-driving cars, various subfields of human expertise. Intelligent machines will first supplement us and may even reach human-level performance in those specialized domains. So when we develop these systems, we're not immediately faced with the overall problem of trustworthy superintelligence. We're faced with, "Can we make these systems work in specialized fields?" That's I think the first development we'll see.

If you look at something like self-driving cars, which initially seem somewhat worrisome to the general public already (there's no driver, and what do you do when you're about to get into an accident?), if you look at the development now and the way it's being treated in the press and the media, I think it's a positive development. People are becoming aware that these cars can be designed in a safe way, that the companies working on these cars put safety first, and that overall these kinds of smart machines, such as self-driving cars, will actually be very beneficial to us and will reduce overall traffic fatalities. Estimates are that a 90-percent reduction is feasible.

I think people will start accepting these technologies as a positive thing in society. With a self-driving car, you might say, "Well, it's a limited example because it's a very well-described domain," but it will be one of the first demonstrations of how artificial intelligence techniques can be a largely positive factor.

Stuart mentioned various issues of how these systems optimize utility functions. In practice, you have many conflicting interests and conflicting criteria to deal with. But again, I myself work in a field called computational sustainability, where we work with sustainability scientists, social scientists, and economists on sustainability challenges and use AI techniques to tackle those challenges.

What we experience there is that, yes, there are conflicting criteria and sometimes conflicting objectives. Local economic objectives for development contradict overall sustainability goals. What the systems we develop do is give you those trade-offs. We've built systems that allow you to explore the various trade-offs among objectives and basically make more informed decisions. That is an example of how AI systems can lead policymakers to better overall decisions. So again, in that domain, AI will be a positive factor.
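As a rough illustration of that kind of trade-off exploration (the candidate plans and scores below are hypothetical, not from any actual computational sustainability model), one common building block is filtering candidate decisions down to the Pareto-optimal set so that the trade-off curve is visible to the decision-maker:

```python
# Hypothetical candidate plans scored on two conflicting objectives.
plans = {
    # name: (economic_value, habitat_preserved)  -- illustrative units
    "plan_a": (10.0, 2.0),
    "plan_b": (8.0, 5.0),
    "plan_c": (7.0, 4.0),   # dominated by plan_b
    "plan_d": (4.0, 9.0),
    "plan_e": (3.0, 8.0),   # dominated by plan_d
}

def dominates(x, y):
    """True if x is at least as good as y on every objective and better on one."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

# Keep only plans that no other plan dominates: the Pareto frontier.
pareto = {
    name: scores
    for name, scores in plans.items()
    if not any(dominates(other, scores) for other in plans.values() if other != scores)
}

for name, (econ, habitat) in sorted(pareto.items()):
    print(f"{name}: economic value {econ}, habitat preserved {habitat}")
```

The system surfaces the surviving plans (here plan_a, plan_b, and plan_d) and their trade-offs; the policymaker, not the algorithm, chooses among them according to their priorities.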

That's how I view the near-term development of AI, and I'm hopeful that our research community, in collaboration with these other fields, will be able to manage these risks quite well. There are, of course, issues that are more challenging, for example, autonomous weapons and military development, so there are certainly areas of AI where I think the challenges are bigger, but the immediate impact of AI on our lives will be largely positive.

I want to give one more example of some of these challenges. Look at something like computer chess, which has a long history in AI, starting over 50 years ago. In 1997, I guess, Deep Blue beat the human world champion and became the world's best chess player, basically. It was to some extent maybe a bit of a disappointment for AI research, because what Deep Blue does is a straightforward exploration of the future: it checks possible moves up to 20 or 30 moves ahead, very exhaustively.

Interestingly, just last year DeepMind came up with a new approach called AlphaZero, where a computer chess program played against itself and trained a deep neural network to play chess. That program actually outperforms Deep Blue; it's better. However, I would say it's more mysterious and less trustworthy in that sense. We don't know whether there could be some hidden strategy that beats AlphaZero very easily; we just don't know whether it exists. Deep Blue, by contrast, is not quite as good, but because of its exhaustive search it's very unlikely that there is a simple way of beating it. I actually believe one doesn't exist.

So it's an example of two AI approaches to the same problem, this very limited problem of computer chess, where one approach is more trustworthy than the other. You can build the more advanced one, which is stronger, but it's harder to trust.
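To make the contrast concrete, here is a generic sketch of the exhaustive look-ahead style described for Deep Blue, applied to a toy game (this is the textbook minimax pattern, not Deep Blue's actual program, and the game is purely illustrative). An AlphaZero-style player replaces the hand-written evaluation function with a learned neural network, which is what makes its play harder to inspect and, in the sense described above, harder to trust.

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Exhaustively search `depth` plies ahead and return the best achievable value."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)
    values = (minimax(c, depth - 1, not maximizing, moves, evaluate) for c in children)
    return max(values) if maximizing else min(values)

# Toy game: a state is a number, each move adds or subtracts 1, and the
# hand-written evaluation is just the number itself.
best = minimax(0, depth=4, maximizing=True,
               moves=lambda s: [s + 1, s - 1],
               evaluate=lambda s: s)
print(best)  # 0: with both sides playing optimally, gains and losses cancel here
```

Because every line of play within the search depth is checked against an evaluation you can read, it is much easier to argue that no simple trick beats such a program; a learned evaluation offers no comparable argument.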

So we are going to have to deal with these kinds of trade-offs in designing our systems, and I'm hopeful that we will aim for the more trustworthy approaches. We may give up a little performance in doing so, but again I think it's a positive development.

DAVID ROSCOE: Terrific. Before we get to some questions, Wendell, did you want to speak a little bit about the fourth objective of our project, which was this concept of governance?

WENDELL WALLACH: Great. Thank you, David.

Those of you who are here have in front of you an executive summary of our report, and there are also copies of the report. We don't have them for everybody, but for those of you who are particularly interested in the in-depth discussions and in getting beyond what we could cover on this panel, there are some copies here. For those of you who are listening on streaming video or who tune in later, we will hopefully have links to the full report available.

In our report we did avoid certain subjects that were largely tabled even at our workshops, such as technological unemployment, lethal autonomous weapons, and even transparency and bias, which we talked about early on. So many other reports on those subjects have come out from other groups that we didn't feel it was necessary to belabor you with them; instead we focused on the recommendations that were most central to this project.

In addition to these recommendations around transdisciplinary communities and research on value alignment and machine ethics, one of the topics we had very rich discussions around was the governance of AI. I am using the word "governance," not "government," because there has been a recognition by many of us who have worked on the ethics, governance, and policy of emerging technologies over the past decade that there is a total mismatch between existing forms of governmental oversight and both the speed at which these technologies are developing and the breadth of their applications.

As most of you are gathering by now, AI is not a sector-specific technology. In that respect it is closer to electricity than to automobiles, which were really about transportation. AI is going to touch, and already is touching, perhaps every sector of human life. How are we going to monitor that, and how are we going to be assured that there are no gaps in the kind of oversight we put in place?

Gary Marchant, who is not here but was one of the other co-chairs of this project, is the director of the Center for Law, Science and Innovation at the Sandra Day O'Connor College of Law at Arizona State University. He and I had been dealing with this problem long before this project started, and one day we decided that, rather than belaboring the ineffectiveness of the governance we had, we would ask what we would put in place if we had our druthers. Out of that came a model we called "governance coordinating committees." The basic idea was to put in place some kind of governance structure that was much more agile, much more comprehensive, and truly multi-stakeholder, so that it wasn't dominated by either elected officials or even the leading corporations but had broader input from other segments of society.

We developed this model, and it played on themes that were already out there, at the World Economic Forum, at the United Nations, and here in many discussions at the Carnegie Council for Ethics in International Affairs. People have been talking about the need for agile governance, something much more adaptive and responsive, and various ideas have come to the fore, but so far nothing really effective has been implemented.

Our idea was that emerging technologies, largely because some of them are not yet encumbered by existing laws and regulations, actually afford us a pilot project for working with more agile structures of governance. In particular, Gary and I had recommended that we look at artificial intelligence, and at synthetic biology and gene editing more broadly, as two areas for which that kind of governance is very important and does not yet exist.

So we brought this to our discussions in both the first and third workshops. We had some really rigorous discussions that were enriched by the plethora of genius we had in the room, and what emerged from the last workshop was that we should move expeditiously toward convening an International Congress for the Governance of AI. So that is a project that has moved forward.

First, let me say a little bit about the model, and then I'm going to talk quickly about the congress idea. The model was that perhaps we put in place some new mechanism for governance that would monitor the development of the field and engage in loose coordination of the many people weighing in, but particularly look for gaps: areas that were not being attended to by the many other bodies now jumping into the AI space and that required some attention. In looking for mechanisms to address those gaps, it would look very broadly, starting with what can be solved technologically, but also at what might be solved through corporate oversight, technology review boards or technology officers within corporations, or industry oversight. And the point was not just to give lip service to self-governance within corporations, but to ask how we could hold the corporations' feet to the fire.

It would also look at what is sometimes referred to as "soft" law or soft governance: standards, laboratory practices and procedures, insurance policies, a vast array of mechanisms that often come into play long before we get to laws and regulations.

Some of those standards, of course, do rise to the level of needing to become hard law and regulation because they require enforcement. But we are looking at all of those different areas, turning to law and regulation only where it becomes most essential for enforcement, and if possible even giving our governmental leaders a new role: putting in place ways of enforcing standards that have already been broadly accepted. We actually see that already, for example, with the Federal Trade Commission in the United States, which will prosecute almost anything it sees as an egregious betrayal of public trust within the areas of trade.

That gives you a sense of what we're talking about in the mechanism, but again we decided that it wasn't enough to start with the United States or with any other single country. We need to simultaneously have an international project, and in that regard we convened a meeting here in New York City on September 26. It was co-chaired by something called Building Global Infrastructure for the Agile Governance of Artificial Intelligence (BGI for AI), which is really me, Stuart, Francesca, David, and a number of other advisors we have brought in from around the world. It was also partnered with UN Global Pulse and with The World Technology Network under Jim Clark, who has been building a network of people in technology and giving awards every year to leaders in the field.

We brought together 70 leaders and representatives of many of the major leading bodies, including the Institute of Electrical and Electronics Engineers (IEEE). The United Nations had representation from many different fronts at that meeting. We had OpenAI, one of the other leading laboratories, represented there.

That body enthusiastically endorsed that we should move ahead with this International Congress for the Governance of Artificial Intelligence, and that we should do that if possible by November of 2019. So that is actually an ongoing project, something we're focusing on. It was one of the direct outgrowths of this series of workshops. For those of you who might be interested in that, it's discussed in much greater depth within our report.

Questions

QUESTION: Good morning. My name's Jordan. I'm a graduate student at Northeastern University, studying urban informatics, working toward a Ph.D. in network science.

Actually, I was at the Envision Conference at Princeton this weekend, where I met Dr. Wendell Wallach. Thank you for inviting me to this conference.

I guess my question is about an objective I was thinking about: What if the objective for AI was to solve global poverty, maybe by building a bottom-up model where we develop a justified, trusted model for universal basic income and have people innovate in developing and emerging economies, so that we can get it right or develop a proof of concept that can then be brought to developed economies at scale? I was just curious about your thoughts on focusing on solving global poverty as an overall objective for AI.

The second question would be how to get involved with the International Congress for the Governance of AI because I would love to help.

STUART RUSSELL: The goal of solving global poverty is a great one, I think. If you did add up all the preferences of all the people on Earth: there are about 2 billion people who I would say have a reasonable standard of living, and the rest, I think, have anything from an acceptable down to an utterly miserable standard of living.

I was just looking at some stats. There are dozens of countries where the health care expenditure per person is less than $20 a year. When we in AI think about, Oh, we can use AI systems to look at x-rays, well, an x-ray costs $720 in the United States. That's 36 people's health care for an entire year, to have one x-ray. The differences are so enormous.

I don't know what the answer is. One thing is clear: It's not just a matter of money because if everyone in these least-developed countries suddenly had access to a universal basic income, it wouldn't help because they don't have the capacity to provide the goods and services which people could buy with this basic income. So it's a very complicated bootstrapping process involving education and development of infrastructure and development of institutions as well as having money. Money is a good thing to have if you want this to work as well.

I think AI can help with this, but at least for the foreseeable future it's got to be on the human beings to organize and solve those problems and to deploy the resources that AI can bring. For example, we can I think perhaps in five to ten years' time have AI systems that provide extremely high-quality education in the native language of whatever country you wish on any subject under the sun.

But how do you then use that? What do you teach people, and how do you take the result, which is educated, trained people, and deploy them in your economy? If you don't have a functioning market, then it's no good being an extremely good commodity trader if there's no market for you to trade in, and so on. So there are a lot of steps that have to be taken.

FRANCESCA ROSSI: Of course, these are difficult problems: understanding how a technology like AI can help solve these big societal and world-level problems. But the good thing is that there are initiatives that tackle exactly that point, how to make sure that we understand how to use AI, for example, to achieve or move toward the achievement of the United Nations' 17 Sustainable Development Goals, among which are providing health care for everybody, eliminating poverty, and so on.

So every year in Geneva the UN agencies get together in a conference called AI for Good, which has exactly that goal: to put AI people on one side and UN agencies on the other, the people who know the problems to be solved and the people who can find the right solutions. That's another initiative, which started two or three years ago; there have already been two conferences, held every May. I think it's very important to participate, because it can really help us understand how to solve those big problems.

WENDELL WALLACH: Francesca was also at another recent conference I know of, AI for People, and the week before last I was in Mumbai at a conference called AI for Everyone. So these goals are out there. There is a broad interest in whether AI can be utilized to solve some of our greatest problems in health care, poverty, and so forth.

But I do want to say one thing that I said both in Mumbai and at the International Telecommunication Union (ITU) conference at the United Nations this year. This emphasis on AI for everyone also belies a concern that AI may not be for everyone and that AI may contribute to extreme inequality. If it exacerbates inequalities, it may actually undermine the realization of the UN Sustainable Development Goals and more.

The concern here is that these conferences not be just a patchwork quilt laid over AI as a mask suggesting it will do something for humanity as a whole, but that we actually make sure that is the case. We have to deal, in one way or another, with the tremendous inequality, the distribution problem we will have, particularly if the IT oligopoly is also the AI oligopoly. Whether you talk about universal basic income, a higher standard of living for everyone, or the $15.7 trillion of growth in world GDP by 2030 that PricewaterhouseCoopers has projected, those resources must truly be used to raise the lot of humanity as a whole.

QUESTION: Thank you for a very interesting discussion. My name is James Starkman. I'm a regular at the Carnegie Council.

Won't the bottom line be that corporations and governments will want to have the edge on their competitors? I think the conferences and the setting of standards are a very good thing, but won't they in most cases be overruled by the considerations I've just mentioned?

BART SELMAN: Well, it actually relates a little bit to the previous question and to Wendell's comment. There is some pushback. I would say right now companies and governments are, of course, a major force in the development of AI, especially Google and Facebook, the big IT companies.

But there is a movement back in the research world, and the field I'm partly involved in is computational sustainability, which was set up to counter that and to basically say, "AI can do good if the researchers and the research community and development community make a special effort to focus on these other challenges that we face."

What we're seeing is that this work is now being funded by foundations and other organizations, NGOs, that are very interested in these humanitarian problems and are pushing back on the purely commercial side. I'm somewhat optimistic that we're actually seeing the emergence of that kind of pushback, funded by other organizations.

Related to the previous point, poverty mapping, for example, is something that can be done with AI, using satellite images to map poverty in Africa. That is being funded by the Rockefeller Foundation, which made an enormous investment in a start-up company that will do worldwide poverty mapping and analysis of societal problems. So I'm somewhat optimistic that we can push back, but it is a very important issue.

STUART RUSSELL: A couple of points. One: I was at a meeting in China recently where a ranking of the top 50 Chinese AI companies was announced. At the press conference, a very brave journalist, I guess, asked, "Who are the main customers for these AI companies?" The answer is the Chinese state security apparatus. That seems to be what AI is for in China. That's one point: there are clearly forces that don't have the benefit of everyone at heart.

Then the other point is that if we do succeed in achieving human-level AI, in a sense at that point the competition for resources, for access to the wherewithal of life, which has basically governed history since the beginning, goes away. We no longer need to compete when basically everyone can have everything.

So the way I put it (I give a lot of talks in China) is: "Look, you can have a dominant share of a nonexistent pie or you can have a fair share of an infinite pie. Which would you rather have?" I think if countries can understand that, then they should see that cooperating to reach that infinite pie quickly is the best solution. Whether they will understand that, I don't know.

When you think about it, the one thing that we are going to still be competing on where there isn't an infinite pie is land, space for having a high quality of life, and that's unequally distributed and finite and fixed. I'm guessing that's going to be the primary source of international competition going forward.

FRANCESCA ROSSI: I wanted to say something here. I lived all my life in academia until three years ago, when I joined IBM, a big corporation with a long history in AI. I think that more and more companies, especially big multinational corporations like IBM, feel the responsibility to identify, understand, and bring to the surface the issues within the technology; to find solutions in research; and to understand how to inject those research solutions into the products, platforms, and solutions we deliver to our clients—in short, to take a holistic approach to these issues.

For example, we also have an AI for social good program that works on the opioid crisis and the Zika problem with data from various foundations. It is really a holistic approach: we try to understand the issues, find research solutions, understand how to apply the technology to good, beneficial effect, and then deliver to our clients.

Of course, this holistic approach cannot be expected from small start-ups that may not have the resources to spend on it, but from big corporations, yes. So we should not expect the same from everybody.

QUESTION: Good morning. Mark Duncan from the United Nations Office for Disarmament Affairs.

As you may or may not know, the secretary-general released his disarmament agenda this year, and a big section of that appropriately calls for disarmament for future generations, including around emerging means and new technologies. So I find David's phrase about "guardrails and not stop signs" quite interesting, as I think the UN's preference would probably be stop signs, seatbelts, and maybe a moat on either side of the road.

I know the report did not address lethal autonomous weapon systems, and I do appreciate that that has been covered ad infinitum elsewhere. But regarding the soft governance structure proposed, based on the coordinating committees and an international mechanism: how would this mechanism, or soft approaches in general, ensure that these new technologies adhere to existing bodies of public international law, whether that is the law of armed conflict or humanitarian law, and also to nonquantifiable norms, values, and concepts like the responsibility to protect and the sovereign equality of states? How do you ensure that these remain effective and accountable when AI might be taking on increasingly larger responsibilities?

WENDELL WALLACH: That is a great question, and it's obviously not easily solvable. I applaud the secretary-general because, having an engineering background, he does more broadly realize the impact of these emerging technologies.

Some of you may not be aware, but he has also convened a high-level panel that is considering some of these concerns broadly. Some of the leaders of that panel and other representatives from within the United Nations did participate in our meeting in September.

But I don't think all of these questions are going to be easily answered, and I can't answer them sitting here—I would be happy to sit down with you and go into greater depth. At this stage we are talking about a broader structure, but we need to bring all the stakeholders together, representative stakeholders, to begin thinking through what kinds of mechanisms could be put in place.

It's going to be very difficult, even if we create a new international body, for that body to have much direct enforcement power, at least initially. But given how many governments and nongovernmental bodies are moving into the space with standards and soft law, some of that is actually going to take hold, particularly when those are bodies like the IEEE, the standard-setting organization for the electronics and electrical industry. The difficulty comes when you go beyond technical standards and start talking about ethical standards, where there are real differences and concerns.

But I have been in the world of ethics my whole life. When people talk about ethical concerns they always emphasize where there's disagreement. The fact is there is much more consensus than we sometimes want to acknowledge on some of the broader principles.

So let's start there, and at least let's convene the stakeholders to wrestle with some of these less-tractable issues. Again it's not going to be easy.

STUART RUSSELL: I just want to say something very short. I have been suggesting that within AAAI we have our first policy—AAAI, the Association for the Advancement of AI, which is the main international professional society, has no policies on any topic whatsoever. So I'm proposing that we have a simple rule, that you can't make machines that can decide to kill humans. This is one of our ethical standards that we should actually enshrine as a policy principle for the society. I think that would be a good start.

The biologists did something similar in the late 1960s and early 1970s. They convinced President Johnson and then President Nixon to abandon the United States' very large biological weapons research program. So I think it's really the responsibility of the AI community to do something similar.

DAVID ROSCOE: So maybe a few stop signs, but very carefully selected.

QUESTION: My name is John Torpey. I'm director of the Ralph Bunche Institute for International Studies at the CUNY Graduate Center.

My question basically goes back to something you said. You compared AI to electricity as opposed to automobiles in its kind of general significance for our lives.

I guess the question is, does that comparison give us any guideposts or any sense of how the governance and regulation of this technology might work? Electricity was world-changing and world-shaping but dumb in a certain sense: it gets delivered in a certain amount, and that is relatively simple, I suspect, to regulate, whereas this is a technology with constant learning and constant application to new fields. So is the comparison useful in that regard?

WENDELL WALLACH: I don't want us to drive this metaphor into the ground and make it more significant than the point I was trying to make there. There are so many differences in the applications of AI that I don't think we should expect a single set of standards. That said, there will be standards that cut horizontally across many industries. Issues like transparency, algorithmic bias, who is responsible if the system fails, and who is responsible for even deploying the system if it poses dangers—these are going to cut across so many different industries, and there may be standards that are horizontal in that regard.

But there are also vertical considerations, such as AI in health care, where the concerns around data and privacy, for example, are very different than they are in other areas of life. We're also going to have new, tortured problems: if we can aggregate everybody's data, we may be able to make much more headway in solving some healthcare issues, but in aggregating everybody's data, are we violating human rights in realms we perhaps do not want to transgress?

It should not be looked at as a simple one-to-one comparison, but there certainly are many issues that will cut across many sectors.

QUESTION: Thank you. I'm Sakiko Fukuda-Parr, professor at The New School.

In this question of the evolving governance of AI, it seems that this is evolving largely within the AI community. How do you make sure that in that evolution you hear the voices of not just theoretical ethicists but human rights advocates and people of the Global South, particularly those concerned about unequal access? Because those who write the rules could be dominated by corporations with the profit motive at the forefront, by large and powerful governments, and so on.

DAVID ROSCOE: Bingo.

WENDELL WALLACH: That is certainly a concern that I and so many others in this space share. Again, as you rightly note, this is not an easy problem. It's easy to talk about having NGO representatives for the rest of the world, but we have a tendency, again, to be top-down. These still tend to be very elitist approaches.

So particularly now that we are putting together this international congress for the governance of AI, we need help from any of you in this room on who needs to be at the table and who can truly represent voices that might not otherwise be there.

Again, this is at the forefront of my mind. If we can't convene a congress, or we only convene a congress which is again dominated by Europe and America or a congress that is only dominated by the powerful corporations or even the sometimes patronizing NGOs, we have not solved this problem.

FRANCESCA ROSSI: There are initiatives that give a partial solution to that problem at different levels. For example, as I mentioned, the Partnership on AI puts together corporations and even start-ups—companies delivering AI—with everybody else: NGOs, civil society, academia. There is the United Nations Children's Fund (UNICEF) and other UN agencies; there is Human Rights Watch; there are all these other entities. But it doesn't include policymakers, which of course are a big ingredient in this field.

There are other initiatives that try to put together the policymakers with all the stakeholders. For example, in Europe the European Commission—the body that governs Europe at large; even though the member states have some form of independence, for some things they have to respect European law—put together what it calls a high-level group on AI. That is a group of 52 people that includes AI experts, consumer rights associations, ethicists, and others. It is confined to Europe, to understanding the European framework for AI and what the impact can be there, but within that space all the stakeholders are included, including for policymaking.

So there are partial solutions, and I think a role of the initiative that Wendell is putting together is really to find a way to coordinate all of that—maybe a higher-level way to bring all the ingredients together.

WENDELL WALLACH: Just a brief follow-up comment. An important role that I see is a more bottom-up approach. Part of how I see AAAI, the artificial intelligence society, moving forward is to have a more informed interaction with journalists and science writers, basically to get the public more involved in the discussion about the future of AI and its impact on life. That, I think, is one way to get more voices to the table—if things come from the bottom up more than from the top down.

DAVID ROSCOE: As a layman, let me say I completely agree with that.

QUESTION: Thank you. Sorry, I'm not sure this was worth the wait. Giles Alston, Oxford Analytica. Thank you for your presentation.

You've given us two images, I think. One is of a group of scientists working on AI in a relatively closed environment; the other is the one we have today, where we're all talking about the ethical consequences. My interest is in how we moved from the first to the second.

Was it a case of the experts opening the door of the relatively sealed room and saying, "Is there anybody out there who can help us think through this?" Or was it a case of people thumping on the door from the outside saying, "You need to talk to us"? I'm sure the answer is both, but I would love to get your sense of the relative dimension. Thank you.

STUART RUSSELL: It's a great question. I hadn't really thought about that before. I would say it was the former—I'm seeing this from the point of view of someone on the inside—we certainly realized that there were a lot of problems, and it's basically the growing pains of the field.

As I said, we've spent most of our history playing with toys in the lab and not having any impact on the real world. All of a sudden, our products are actually useful, and they're out there in the real world, they're having an impact on a global scale, and we, unlike the medical profession, the civil engineering profession, the nuclear engineers, haven't developed the regulatory and professional standards and so on that we need. So we are suddenly aware of this power that we have, and we want to know how to use it wisely.

But I will also say that people like Wendell, who is not a technical AI researcher, have been, I think, kind of like the little conscience standing on your shoulder—

WENDELL WALLACH: Banging on the door.

STUART RUSSELL: —telling us to think about these problems as well. There are definitely things happening in both directions.

But on questions like autonomous weapons, for example, I would say the AI community was completely asleep at the wheel.

QUESTION: Anthony Faillace.

Is this change in technology so profound that it's going to require a wholesale change in the way we handle intellectual property rights? And if so, what will that look like? Obviously there's a lot of discussion about the "first-mover" advantage and the copyright, oligopoly, quasi-monopoly approach that we have now. Does this change the game completely?

BART SELMAN: I'm not sure it's a dramatic change, but we definitely see—and this is something academia struggles with in part—that some of the most exciting developments in AI are right now happening within companies. When DeepMind developed AlphaGo and various other recent breakthroughs, that technology was not actually made available. The principles are available, but the actual AlphaGo network, the deep net that they trained, is not available to academics or to outsiders.

So I think there is a bit of a new tension between what companies can and prefer to keep to themselves and the interest of the public, and of academic researchers and others, in examining those technologies. It's maybe not directly an intellectual property question, but there are issues there that are somewhat worrisome for the academic community, yes.

STUART RUSSELL: I would say the AI community within industry has been surprisingly open in publishing most of its ideas. The deep Q-network (DQN), the big thing DeepMind did before AlphaGo, is described in enough detail that you can reproduce it, and some of my colleagues did. Microsoft, IBM, and Google continue to publish in academic conferences, so compared to a lot of other industries it's relatively open.

I think we're going to see serious complications when a lot of intellectual property is produced by AI systems. Who's going to be the inventor? Who's going to own that? Is it going to be the company that's using it to produce inventions or the company that created the software, and so how are they going to license it for use in invention and so on? So lots of interesting questions.

FRANCESCA ROSSI: That's something that is already happening. You may know that—I don't know exactly where—a few weeks ago a painting produced by AI was sold, and there is original music produced by AI systems in ways that the programmers could not anticipate, so the copyright issue—

DAVID ROSCOE: Is a big one, yes.

FRANCESCA ROSSI: —and intellectual property issues.

DAVID ROSCOE: Wendell, a final comment.

WENDELL WALLACH: So whether AI will change the ownership and copyright and so forth is one question, but there's another question about whether it should, and here's where the comparison to electricity comes in. To what extent should we be treating this technology as a public good, as a utility, rather than something that is simply owned by those who happen to have control over data or control over specific technology?

If we treat it as owned by those corporations, then we will just exacerbate inequality. So, if AI for good is truly a goal, we do need to think through much more creative ways of managing this truly empowering technology.

DAVID ROSCOE: With that final comment, I'd like to thank this audience, both here and virtual, for a very stimulating set of questions. Thank you very much.
