Agile Global Governance, Artificial Intelligence, & Public Health, with Wendell Wallach

May 22, 2020

The rapid development of emerging technologies like AI has signaled a new inflection point in human history, accompanied by calls for agile international governance. With the onslaught of the COVID-19 pandemic, however, there is a new focal point in the call for ethical governance. Senior Fellow Wendell Wallach discusses his work on these issues in this interactive webinar with Carnegie Council President Joel Rosenthal.

JOEL ROSENTHAL: Good afternoon, and welcome to the Carnegie Council lunchtime webinar series. Wendell, thanks for joining us from your home in Connecticut.

WENDELL WALLACH: Thank you, Joel.

JOEL ROSENTHAL: Thanks everybody for tuning in. Today's topic is "Agile Global Governance: International Cooperation, Artificial Intelligence, and Public Health," and our guest is our good friend, Carnegie Council Senior Fellow Wendell Wallach.

Wendell is a consultant, ethicist, and scholar at Yale University's Interdisciplinary Center for Bioethics. He is also a senior advisor to The Hastings Center, a fellow at the Institute for Ethics and Emerging Technologies, and a fellow at the Center for Law, Science, and Innovation at Arizona State University. Additionally, Wendell is lead organizer of the first International Congress on the Governance of Artificial Intelligence, a project which I hope we can discuss in some detail in the next hour.

I first got to know Wendell when he came to Carnegie Council to discuss his first book Moral Machines: Teaching Robots Right from Wrong, and then his next book A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control. What started as mutual interest in the ethics and governance of emerging technologies has become a long friendship. Wendell, I hope you'll consider this discussion as just another step in what I hope is still a long journey ahead.

Today we want to talk about the tech response to the pandemic, specifically the use of big data and artificial intelligence (AI) as we move from the initial acute lockdown phase to whatever it is that is coming next. Among the questions are: How have international institutions used big data and AI in their responses to the pandemic so far? Has this new stress to the system highlighted the need for more agile global governance when it comes to emerging technology? Or has international governance of AI and other emerging fields of research been set back by the pandemic?

Before turning to Wendell, just a word about our format. We have asked Wendell to kick things off with a short presentation. After that, Wendell and I will have a dialogue, but the back half of the program will be interactive, so we encourage you to use the chat function to pose questions, and when we get to the second half-hour our moderator Alex Woodson will read questions on your behalf.

So over to you, Wendell.

WENDELL WALLACH: Thank you ever so much, Joel. I'm thrilled to be with you today, and I'm honored by all of you who have overcome your Zoom fatigue and tuned in to join us today. I know that sometimes returning to your computer and watching one more discussion can be a bit daunting in these times. Of course, the good news is that we can attend meetings in sweats and comfortable clothing, and if you're bored or sleepy, you can at least turn off your camera and nobody will know.

My comments are going to be more or less on the order of the good news and bad news that has emerged from the present pandemic and how that has affected our ability to both deploy and provide some degree of oversight for emerging technologies.

This forced shutdown has given many of us a significant opportunity to reflect in ways that perhaps we have lost over the preceding months. Just a few weeks before the shutdown I was on half a dozen flights to various corners of the world in preparation for the International Congress for the Governance of AI, and with the shutdown I have not traveled in two-and-a-half months.

The first stage was really reflecting on the total uncertainty, the uncertainty about the coronavirus itself. We still don't understand a great deal about it, and it seems to be altering its expression in various ways, but also just to assimilate what was taking place. The world has shifted, and our minds and bodies need to take in a great deal to just get a sense of where we are and what the opportunities are that are opening up and those that are disappearing.

That window for reflection seems to be slowly closing as society tries to reopen. My greatest fear—and it's not just set off by this pandemic, but it has been a concern in regard to emerging technologies over the past decade—is that we're going to get so overwhelmed in reacting to crises that we lose the capacity to shape humanity's future, and this is being exacerbated by this present health and economic crisis. Not only are we being forced—in addition to our previous concerns—to manage the pandemic and restart the economy, but we are going to have to address the needs of the hundreds of millions who have had their lives totally disrupted by this tragedy.

The United Nations World Food Programme has estimated that the number of people who are "marching toward starvation" has doubled from 130 million before the pandemic to roughly 260 million now. That will be actual starvation if we don't find effective means to mitigate it.

UN Secretary-General Guterres, in a speech a few months back, stated what he thought were the four "modern horsemen of the apocalypse." They were climate change, geopolitical tensions, mistrust, and the dark side of technology. I'm going to talk mainly about the latter two, but let me say a few words about climate change and geopolitical tensions.

The good news is that we are suddenly seeing all of these wonderful videos of animals in city streets and pictures again of the Himalayas peeking out over cities in Asia such as Kathmandu, cities that have been so enshrouded in pollution that you only periodically, if ever, get to see those mountains.

The bad news with climate change is that even with this major shutdown, this timeout, we are not flattening the climate change curve; it continues to go up, even if at a slower pace. So even if we could hold the spewing of carbon into the atmosphere at today's level, that would still contribute to continuing climate alterations.

Geopolitical tensions. Normally a pandemic can increase the degree of international cooperation, and we have seen that. We have seen countries all over the world establish measures that limit the transmission of the virus.

But we are also in a period of heightened tension. This is not being helped at all by the fact that this is an election year in a deeply divided United States.

Traditionally you would turn to the World Health Organization and the Centers for Disease Control and Prevention (CDC) as major factors in mitigating a pandemic or even a simpler epidemic, but on the international stage the CDC has had a rather minor role, and as most of you are aware there has been a great deal of politicization around the World Health Organization's efforts. The World Health Organization is a body—like many of our international bodies—in need of serious reform, but it is difficult to direct our attention to reforms in the middle of a pandemic, and really they should not be a primary concern given how much the World Health Organization does to ameliorate the various health crises we are dealing with.

Mistrust I think is a more serious issue, and one with which technological oversight is somewhat complicit. In many countries of the world we are flirting with what Jürgen Habermas called a "delegitimation crisis": a crisis in which a significant portion of the citizenry loses any faith in its institutions to meet its needs.

Crises underscore existing trends while changing the course of history, and one of those existing trends has been the exacerbation of inequalities, which this pandemic has truly and seriously highlighted. We are now basically two worlds when it comes to people who do or do not have economic well-being, and I suspect that nearly everyone listening today is among those who have not lost their jobs. For us the most serious consequences may be illness and the loss of life among those we are close to, but we also have the assets to meet our needs, not only today but, we anticipate, for the near if not the distant future. And as we all know, that is not true for the vast majority of the world's population.

So how are we going to address that? What roles will technology play in addressing that? And what are some of the trends that this pandemic is underscoring in terms of technological development?

First of all, it is strengthening the digital economy. Online retailing is booming. In fact, the digital economy, from the perspective of the stock market, is booming. NASDAQ is higher than it was before the onset of the pandemic.

Large chains are being seen as essential businesses while small businesses are being kept closed, and that's going to create significant disruptions in who survives and who does not survive and what is the character of our life as we come out of this pandemic.

The power of the Information Age oligopoly is growing dramatically, while the efforts to rein it in have been weakened. Furthermore, there is an increase in the movement toward a surveillance economy, an increase in the pushing of tech solutions as being the way in which we are going to solve nearly every issue in our society.

As has been the case for many years now, there is an exaggeration about what tech can and cannot do. We have been watching some exaggerations of what AI can do in helping mitigate some of the issues that are raised by the pandemic, solving the public health crisis itself, or helping to find a cure, but those exaggerations are often way ahead of what we're actually witnessing. Yet it is true that tracking the disease—digital tracking in particular, using your cellphone—is actually going to be one of the greatest tools we have to at least mitigate some of the effects of the pandemic.

Interestingly enough, there are a few companies, such as Moderna, which two days ago announced results from a vaccine trial showing that a significant number of those who had received the vaccine—and when we're talking about significant numbers, we are talking about trials with eight, ten, and 25 people in a group—produced antibodies similar to those we see in people who have actually had the disease. Of course, we still don't know whether these antibodies demonstrate effective protection against getting the coronavirus a second time, and some of the anecdotal evidence, such as that from the aircraft carrier USS Theodore Roosevelt, is that at least half a dozen sailors who had the coronavirus have gotten it a second time, and there are similar anecdotal stories coming out of Japan.

I have taken a particular interest in the governance of emerging technologies because I recognize that these technologies do offer our greatest hope for solving some of our most serious problems. The hope is that the UN Sustainable Development Goals can be met by 2030, and that is certainly not going to happen unless we get major catalysts from technological tools.

But I am—as is Antonio Guterres—concerned about the dark side of technology and how we're going to mitigate that. I have also been concerned about the lack of agility in international governance, the need for multi-stakeholder input. We are in a situation right now where our world is being rapidly remade by emerging technologies, and yet very few people have an actual say in which technologies get deployed.

I have been concerned about the complexity of the challenges and about the fact that many of the people, particularly in public life, who have to make the decisions do not really understand the technologies they're making decisions about. Furthermore, the challenges are so complex that even those of us who have a fairly trans-disciplinary appreciation of what's going on don't fully understand, nor can we predict, how this is going to unfold.

There are tens of thousands of research trajectories in emerging technologies, and nobody knows when the breakthroughs will happen, in which order those breakthroughs will occur, and how they will impact each other, the synergies that will be created out of the ways in which they interact. So nobody can really predict how this Information Age is going to unfold and which of the challenges are going to need our attention most quickly.

Finally, I'm particularly concerned that there's a lack of good faith brokers. In this breakdown of trust significant portions of the populace no longer trust traditional experts, they don't necessarily trust science, they don't trust those of us who try to develop expertise and work to educate and help others understand the challenges at hand. This is a serious issue. I'm not sure how you move to this dramatically transformative world if you cannot establish a degree of trust because without trust you don't have social cohesion.

There has been a mismatch between the speed of technological development, scientific discovery, and the deployment of new technologies and the slow pace of traditional forms of governance. The slow pace is endemic to most of the institutions in place. It is exacerbated by lawmakers' lack of knowledge about the technologies that need to be governed, and in the international realm another contributing factor is that we have a very weak multilateral system in serious need of reform. Furthermore, it's not only a weak multilateral system; in many cases the multinational corporations have much more power than all but a few countries.

Technological reform has always been stymied by what is referred to as the "Collingridge dilemma." The Collingridge dilemma goes back to a 1980 book in which David Collingridge said that it is easiest for us to regulate, to shape the development of a technology very early in its history, but early in its history we don't really know very much about how it will unfold, and by the time we do know how it will unfold it is so deeply entrenched in the society that it is often too late to alter its overall trajectory, its overall influence, and its overall power.

This Collingridge principle has been taken to heart by many legislators in a kind of binary sense, where they shake their heads as to whether we can do anything. But I and most of those of us who believe in the ethics of emerging technologies and the governance of emerging technologies reject this binary thinking. We do believe that you can institute ideas very early on, some of which will take seed and some of which will lie fallow, but you can nurture those ideas and you can see which can actually flourish over time.

And we do have a little bit of time in some of these areas. Artificial intelligence and its autonomous capacity to take authority and decision-making isn't that deeply entrenched yet, so that's an area, for example, where there's plenty of time, or at least some time. But that doesn't alter the fact that we need to begin addressing these issues now.

When we organized the International Congress for the Governance of AI, the main idea in our minds was to get beyond broad values and individual policies and begin seeding the creation of mechanisms that could effectively govern these emerging technologies. In that regard we organized a series of expert meetings around the world—one in Delhi, another in the United Kingdom, another at Stanford University—in which we brought together individuals with some expertise to come up with proposals for how we might move forward.

One core proposal that I hope to talk a little bit more about today is global governance networks, an idea for multi-stakeholder, multilevel trust networks.

Let me just stop there and see what else we might discuss. Thank you very much.

JOEL ROSENTHAL: Wendell, thank you very much. You have given us a lot to think about.

Let me try to extend the conversation in one particular direction. The idea of these emerging technologies, they are by definition global in scale, so what we're looking at is the governance of these technologies in some global capacity, and what we're seeing now in the international world is a fracturing of the global system. In particular we can look at it as the West and the rest, if you will, or the West and the East. I like to think of it as we have open societies and perhaps those that are moving to a more closed system.

That's a long way of asking, how do you think about the different political systems, in particular China as one model, and how that fits in with your concept of some kind of global governance mechanism?

WENDELL WALLACH: Thank you. That's a great question, Joel.

I think, as you and many of the people on this call already know, I have been going back and forth to China—until travel was shut down—roughly every two months or so, and now they're dragging me into some of these online forums, given that various events will not take place in China over the summer and fall. The nice thing about doing that is that you get a very different perspective on what's going on in China than the one purveyed in the United States and, to a lesser extent, in Europe.

Let me start with a little example. In the age of this pandemic we're all very forgiving of the fact that the television news shows we watch aren't very well engineered, that oftentimes people are coming on through Zoom and we can't hear them very well. We're willing to take that into account. We're willing to take into account that there may be people—and I don't include myself in that—who don't have their pants on, but we'll never see that when they are in these meetings, something that would be totally unacceptable in a traditional meeting.

We have become increasingly intolerant of everything done by those who do not share our ideology. We get carried away emphasizing what they do wrong and are perhaps under-generous about what they do right. That has created a condition in which we are unable to give people their due; we are so critical that even when something egregious takes place it carries no weight, because our adversaries are seen as not understanding what we're doing or why we're doing it. I think that's no less true in the way China and America view each other than in how we view those of different political persuasions within our own countries.

The interesting thing for me in being in China is getting an on-the-ground feel for what's going on. One of the things I see is that the vast bulk of the population is very happy with their government, much happier, I would say, than Europeans and Americans are with our own governments. It does seem to be a genuine happiness, a genuine pleasure.

Sure, there are people who resent the fact that they can't openly criticize their government, that there is not as much tolerance for criticism or individual expression as there may be in the West, but they also are witnessing the fact that their government has taken 800 million people out of poverty in the last 40 years. There's nothing comparable anywhere else in the world and, at least in the present, things are getting better. That's not true obviously for the Uyghurs, the Muslim minority which is being repressed. That's not true for everyone. But I think we lose sight of that.

Another thing we lose sight of is China's discomfort with the stress on human rights, even though they have signed the UN Universal Declaration of Human Rights. In the various lists of values for the governance of artificial intelligence, the Chinese emphasize harmony over rights. That is deep within their culture. It is not being done just to quiet criticism of the government. It reflects a great deal about what that society is, and they have been through tragedies in their past that make them very uncomfortable with the kind of individual self-expression they see as undermining communal solidarity.

Again, we're not taking the time to have those conversations, to see whether we can come up with a consensus about where human rights really should be the framing reference for the development of international governance and where perhaps what it means or how human rights might be deployed might not be the same in all societies. I'm not trying to be an apologist here for China. I'm just trying to say that it looks very different to me once I have been there.

The other interesting point is that even though China is ruled by one party I think that beyond criticism of the government the Party listens amazingly closely to the wants of their citizens. The great bulk of the citizens are of one ethnic background, so there may be more uniformity there, but I sometimes think that China is perhaps even more responsive than our so-called "democratic open" societies.

That's speaking about China itself. But yes, there are challenges with China, there are challenges with societies that aren't open, and there are even more serious challenges with autocratic nations that don't even try to create the illusion of being beneficent. With all of them, beneficent and non-beneficent alike, and even with democratic countries, I'm particularly concerned about the adoption of technologies that increase the capacity for surveillance.

JOEL ROSENTHAL: Wendell, you have a big idea—this isn't just an observation here—of what to do with this, which is the convening of a first International Congress for the Governance of AI.

I wonder if you could just say a little bit more about your concept for that. A "Congress" implies representation, so there will be representatives, not only from around the world, but as you were saying before multi-stakeholder or multi-sector, so I'm imagining it's not just national representation and so on, but we have representation from the tech companies and civil society, the notion of some kind of inclusive conversation about where we're going with emerging tech. So maybe you could just say a little bit more about the Congress itself, and how you're thinking about that process.


WENDELL WALLACH: Let me first give a little bit of background. At the time when we started organizing an International Congress for the Governance of Artificial Intelligence there was actually very little happening internationally. There were certainly meetings on cybersecurity, there were meetings on digital standards, there was a little bit of discussion around privacy, and there were certainly meetings around whether lethal autonomous weapons should be restrained. That was the context into which we walked, and we began organizing a Congress that we saw as a multi-stakeholder forum. Business would be represented, governments would be represented, and civil society would be represented in many different forms, including educators and non-governmental organizations (NGOs). We were also particularly concerned about having significant representation from underserved communities, including small nations and indigenous populations.

Since we began that organizing there has been a dramatic shift: a lot of multilateral organizations are jumping into this space, including the United Nations, the Organisation for Economic Co-operation and Development (OECD), the G7 with leadership from Canada and France, the World Economic Forum, and the G20.

That has shifted a little bit what the Congress may or may not be, because in effect the governmental leaders, and to some extent the corporations, with a little bit of representation from the NGOs, have started organizing independently. Therefore the Congress we were planning, which was to have taken place in Prague in April, has been postponed to October, and I suspect we will postpone it again until next May because I doubt many people will be traveling in October, but we're still monitoring that. That has shifted our emphasis much more to the multi-stakeholder dimension of bringing people together. We do believe that if the stakeholder groups are significant enough, that will get enough of the national and industry leaders to also show up. In that regard we have been looking at various proposals and initiatives that would help nurture multi-stakeholder development.

We are not in a position to have a Congress where we can determine who the delegates will be or whether they are proportional, so this is not going to be a "Congress" in any true legislative sense, and it may actually be very limited in terms of what comes out of it or what kinds of initial clout it has.

My own concern is a bit broader than that. I think we need to begin laying the foundations for the 21st century institutions now, and in many cases they are going to need to replace some of the international multilateral institutions we have that are not very effective. Particularly in technology they need to be much more multi-stakeholder because we can't be "reinventing the human species"—perhaps even out of its own existence—without actual participation from the world citizenry. So the multi-stakeholder dimension becomes really important.

In that regard we have been working particularly hard to ensure that element. We have been working particularly hard to ensure that China will be present, and they did commit to me that they would be. We have even invited Madam Fu Ying to be one of the vice chairs of the Congress, which was to be chaired by Michael Møller, who recently stepped down as director-general of the United Nations Office at Geneva.

Another initiative we have been engaged in is helping to nurture a network of stakeholders from small nations, underserved communities, and indigenous populations. I don't belong to any of those communities, but by making it clear that we wanted their representation—and that we would even hold a training session in advance of the Congress for those who are not digitally literate—we have been able to stimulate that development a bit. Indeed, we even have a foundation—which wants to remain nameless—that has given us a matching grant for scholarships to bring significant representation from those communities to the Congress.

Whether or not the Congress happens, those are seeds that need to be germinated, and that's a lot of what we're focusing on at the moment.

JOEL ROSENTHAL: That's great. Wendell, we're going to get to the questions soon, but I have one last question that is building on where you left off with regard to the Congress.

It would seem to me, in addition to political interests, which you have discussed, that there are massive economic interests, commercial interests, at stake in terms of the development of these technologies in artificial intelligence and so on. One question for the Congress is, how are you thinking about representation from the tech world, those with commercial interests at stake, and how do they respond to this initiative? Do they see it as threatening, or do they see it as something that is perhaps helpful and complementary to the development of their businesses?

WENDELL WALLACH: As you can imagine, there are many corporations, and they have very different value structures and very different goals. I will use Microsoft as an example of a company that has tried to put its values up front, but for other companies their public and fiduciary responsibility to their shareholders is foremost in their minds, and they have been quite happy to stymie governance as much as they can. So it's very mixed in terms of what participation they will or will not have.

Our hope was that enough of them would participate in the Congress so that the others would see that it's important, and we have had partnerships with various corporations such as PricewaterhouseCoopers. We have had participation from Microsoft and Google in some of our experts meetings. That's something to be nurtured.

I don't think it's going to be easy, but I also think that if you actually have multi-stakeholder input and any kind of consensus, the corporations won't want to be seen as ignoring the voices of the citizenry. They may also, depending on the corporate perspective, want to try to persuade or educate the other stakeholders toward their own viewpoint: why, perhaps, the way they feel digital security should be organized makes more sense than some of the public concerns coming up around the General Data Protection Regulation (GDPR) and in other areas.

But I also think that a lot of the corporations see that their livelihood is going to come from implementing long-term platforms that actually put a little bit of teeth on the values that have been most explicitly expressed as those the public wants. So they will participate, sometimes begrudgingly and sometimes as an opportunity to create an alliance with the other stakeholder groups.

Again, I don't think this is an overnight venture. I think we are truly taking first baby steps toward building the international governance mechanisms for this new century.

Will it succeed? Will it fail? It has a lot to do with what kind of world we're creating. This world could very easily deteriorate into total distrust and chaos at this moment, but there is also this opportunity for cooperation, and exactly how we're going to come out of this global pandemic is far from clear at the moment.

JOEL ROSENTHAL: I'm going to turn now to Alex Woodson. Alex, can you help with some of the questions from the audience?

ALEX WOODSON: Thanks, Joel.

This question is from Joseph Carvalko. He writes: "Are we speaking more about a constitutional convention, which designs the governing architectural institutions and develops the legislation needed to regulate AI in the future?"

WENDELL WALLACH: I think it's much too early to be talking about that, but thanks for the question, Joe.

We're a long way from a constitutional convention. We can't even get any kind of a treaty limiting the deployment of lethal autonomous weapons, even though I think those who have argued against such weaponry have won the battle from the perspective of public opinion; they have probably lost it in terms of getting the major international powers on board.

So I don't think we're in the realm yet of any kind of a constitutional convention, but I do think we're going to get beyond the broad values that have been expressed and perhaps start looking at ways of implementing those values as first steps toward putting institutions in place, and then perhaps down the road we can begin to talk about actual new international mechanisms that have a little bit of clout in terms of enforcement.

But at the moment neither the leading nations nor the corporations want any kind of international mechanism with much enforcement capability, and this is already becoming a problem for the European Union with the GDPR and some of the other mechanisms being put in place.

ALEX WOODSON: This is from Joel Marks: "Do you anticipate participation in the Congress by any religious entities?"

WENDELL WALLACH: Yes. I could say more, but we already have the Catholic Church, for example, which has jumped into the artificial intelligence space, and we have people involved with the Catholic Church as advisors and as speakers at the Congress. We don't yet have leaders from other denominations or other religions, but I suspect we are about to; we are talking with a good number of them.

ALEX WOODSON: This is from Bill Armbruster: "Has the Trump administration shown any interest in having the United States participate? Also, have you contacted the International Chamber of Commerce?"

WENDELL WALLACH: I never contacted the International Chamber of Commerce, and I have no idea whether anyone else did, but we had a number of interactions with people in the Trump administration. Just a few days before we postponed the Congress, I was at the OECD for an AI meeting in Paris. At that meeting, even though I was keeping an eye on the coronavirus, we were still talking to leaders about whether they would attend, and there was leadership from the United States there. In fact, the chair of the OECD's ONE AI expert group is from the United States, and they said they were considering it.

They had been actively considering it but had not yet made a final decision about whether to come. If there had not been a pandemic, it looked like we were building the kind of momentum for the Congress where a lot of the fence-sitters would have decided at the last minute that they needed to be there. But we didn't have any upfront commitments from the Trump administration.

ALEX WOODSON: This is from Eugenio Garcia: "What role do you see for the United Nations in AI governance? Will the Congress in Prague have any outcome shared with the United Nations as a platform for international discussions on global challenges?"

WENDELL WALLACH: Again, we are moving in a more pluralistic framework than one where it's determined who or what will have roles. We don't really know at this point. There are suddenly a lot of institutions—including the United Nations—that are jumping into this space, and yet it's not clear what they will actually take responsibility for.

We have more than 55 lists of values for AI, and by some counts there are 88 to 100 documents that could be reduced to such lists. So a lot of people are jumping into this space, but it's not clear what the United Nations will take responsibility for, what the IEEE will take responsibility for, or what the World Health Organization will take responsibility for.

Our way of looking at this was to help map who is taking responsibility for what, flagging the gaps, and seeing which of those gaps might be addressed by existing institutions. That's the first point.

The second point is that we already had significant participation from the United Nations and from individual countries' representatives to the United Nations. The United Nations was not in a position to give a formal endorsement of this project, but that didn't alter the fact that individuals with major positions within the United Nations were coming to the Congress and expected to present there. I think they also were looking to clarify what role the United Nations should play.

In addition, the secretary-general convened a High-level Panel on Digital Cooperation that I and many others on this call participated in and contributed to, and some of the individuals on that panel were part of the organizing committee for the Congress.

This is all by way of saying that there is a good number of us who float from organization to organization and have engaged in a process of weaving the different organizations together, trying to ensure that we work cooperatively and not competitively.

ALEX WOODSON: This question is from Zoe Stanley-Lockman in Singapore. The question is: "How centralized do the expert groups envision the mechanisms to be, and how can international governance efforts avoid turf wars with extant organizations that see themselves as inhabiting this space?"

WENDELL WALLACH: There are two things going on. There are people generating proposals, and there are initiatives going on all over the world to try to flesh out the values that have been espoused by different organizations. We don't know exactly how that's going to unfold. We were largely looking for, first of all, methods to coordinate the different initiatives or at least get them working together and aware of each other's activities and who was taking responsibility for what, and second, addressing those issues that perhaps were being overlooked.

That's our concern here. It isn't that we have thought it through in great detail. We had proposals, particularly this global governance network, that we hoped would be acted upon as a first step, but exactly how that unfolded would again be determined by who participated and what their intentions were. I saw myself as a facilitator, not as someone determining what kinds of mechanisms would be put in place.

ALEX WOODSON: I'm going to ask a couple of questions to bring it back to the pandemic. One is from Carnegie Council's Billy Pickett, just very general: "How has artificial intelligence been used in the context of the COVID-19 pandemic?"

Then, Christopher MacRae asks more specifically: "Are there any states that have started to use AI to trace the virus?" He writes that he is in Maryland, close to the National Institutes of Health, and he says that Maryland has not begun to scope the data that is needed.

WENDELL WALLACH: There are uses of artificial intelligence to track the virus. Some of these maps that are being created have been generated with the help of artificial intelligence.

I think there has been, on one level, disappointment that artificial intelligence doesn't do more, but that's because artificial intelligence isn't that intelligent yet. It is remarkably helpful in managing large bodies of data and perhaps ferreting out relationships that we might otherwise miss. But its greatest contribution so far has been enabling researchers to pursue questions that require hard empirical information.

The main way in which artificial intelligence has come into play in this space is in contact tracing. Contact tracing—hopefully everyone understands this—is essential for ensuring that we don't have flare-ups of the pandemic, but it's heavily dependent upon personal data. The good news is that researchers have been looking at ways for that personal data to stay on your phone rather than become the property of some other entity that uses it for its own purposes.
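[Editor's note: The decentralized approach Wallach describes—keeping personal data on the phone—can be sketched roughly as follows. This is an illustrative simplification, not a description of any specific deployed system; real protocols such as DP-3T or the Google/Apple Exposure Notification framework add key rotation, timing windows, and cryptographic hardening. All names and parameters here are hypothetical.]

```python
import hashlib
import hmac
import os

def rolling_ids(daily_key: bytes, intervals: int = 96) -> list[bytes]:
    """Derive short-lived broadcast IDs from a secret daily key.

    Only these ephemeral IDs are ever transmitted over Bluetooth;
    the daily key itself stays on the device unless its owner
    tests positive and consents to publish it."""
    return [
        hmac.new(daily_key, i.to_bytes(4, "big"), hashlib.sha256).digest()[:16]
        for i in range(intervals)
    ]

# Alice's phone generates a secret key that never leaves the device.
alice_key = os.urandom(32)
alice_ids = rolling_ids(alice_key)

# Bob's phone passively records the ephemeral IDs it hears nearby.
bob_observed = set(alice_ids[10:12])  # two chance encounters

# Alice tests positive and consents to publish only her daily key.
published_keys = [alice_key]

# Bob's phone re-derives the IDs locally and checks for overlap;
# no central server ever learns who met whom.
exposed = any(
    eid in bob_observed
    for key in published_keys
    for eid in rolling_ids(key)
)
print(exposed)  # True
```

The design choice is that matching happens entirely on the user's device: the only data that leaves a phone is a key voluntarily published after a positive test, which reveals nothing about whom the person encountered.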

ALEX WOODSON: I will just ask a question that I am curious about, Wendell. We have a lot of great questions, and hopefully we can do something to answer these off the Zoom call, but just one question maybe to wrap up because it's such an important point.

You mentioned how the pandemic is increasing the rate of starvation for hundreds of millions of people. I'm just wondering if AI can be used to help some of these issues related to the pandemic or not related to the pandemic? How do you see AI helping some of these huge issues that we're dealing with now?

WENDELL WALLACH: I do think it can help. It can help in supply chain management, it can help in tracking where there is need, it can be utilized by those who are trying to address the needs of those who are in difficulty to know in advance what they should anticipate. Again, AI can help in all of these regards. Emerging technologies have become fundamental tools for us to address all kinds of challenges.

The difficulty is that it's not so intelligent yet, so the intelligence is really collective: it lies in the way the tools that do exist are being used by experts to answer the questions they need answered so they can take appropriate steps.

JOEL ROSENTHAL: We're coming up to 1:00, so we'll conclude the formal session. As you said, Alex, I hope people will feel free to reach out to us and to reach out to Wendell for follow-up questions.

As we have discussed, the International Congress will be proceeding, and we'll keep people informed in terms of the future of that program.

Wendell, I just want to thank you in particular. Not only are you a wise observer of what's happening in the emerging tech space and in the governance space, but I really admire the fact that you have stood up to do something. The whole idea of organizing the International Congress is a big idea, and you have stepped up to it. The Carnegie Council is proud to be a part of it.

I will just conclude by reminding people that the webinar has been recorded, and we will post it to the Carnegie Council YouTube channel and also to our website. We will have another program next week, where our Senior Fellow Nick Gvosdev will be the host, and the topic is, "What Americans Think About Foreign Policy," and that will be based on research published by The Chicago Council on Global Affairs and the Eurasia Group Foundation.

Thank you all for joining us, and we hope to see you next week.
