Technology Governance and the Role of Multilateralism, with Amandeep Singh Gill

Feb 7, 2023 91 min listen

In this AIEI podcast, Carnegie-Uehiro Fellow Wendell Wallach and Senior Fellow Anja Kaspersen are joined by Ambassador Amandeep Singh Gill, UN Secretary-General Guterres' envoy on technology. During this engrossing conversation, they cover some of the most critical political, security, technical, and ethical issues in the current global discourse on technology governance and the need for new normative frameworks to mitigate harmful technological applications and secure what the UN refers to as the "Digital Commons." Gill also shares his unique insights from a long career as a multilateral diplomat and leader in digital governance and arms control.


WENDELL WALLACH: Welcome. Anja Kaspersen and I are so pleased to have this Artificial Intelligence & Equality Initiative (AIEI) podcast with our friend and distinguished colleague Amandeep Singh Gill, who was appointed UN Secretary-General Guterres' envoy on technology in June of 2022. Amandeep, who holds a Ph.D. in nuclear learning in multilateral forums from King's College London and a Bachelor of Technology in electronics and electrical communications from Panjab University in Chandigarh, joined India's Diplomatic Service in 1992. From 2016 to 2018 he was India's ambassador and permanent representative to the Conference on Disarmament in Geneva and chaired the Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems under the Convention on Certain Conventional Weapons (CCW).

At that time, Anja Kaspersen was in a complementary role as a director of disarmament affairs at the United Nations in Geneva, and we will return later in our podcast to discuss with both Amandeep and Anja some of the work that was accomplished.

From 2018 to 2019 Amandeep served as the executive director and co-lead of the United Nations Secretary-General's High-Level Panel on Digital Cooperation, after which he was chief executive officer of the International Digital Health and Artificial Intelligence Research Collaborative (I-DAIR), a truly innovative health initiative based at the Graduate Institute of International and Development Studies in Geneva.

Welcome, Amandeep, and congratulations on your recent appointment as the UN secretary-general's envoy on technology.

AMANDEEP SINGH GILL:
Thank you very much, Wendell.

WENDELL WALLACH:
Before we discuss your role and what you hope to achieve, perhaps you can help our listeners understand why progress on technology policy and governance at the United Nations is so slow and so frustrating.

AMANDEEP SINGH GILL:
That is a great question to start our conversation with.

The United Nations and its forums work on the basis of certain principles that have been agreed by the international community. One of those is the sovereign equality of all states. Those principles ensure inclusion, they ensure that every country has a say in what happens at the United Nations, but at the same time there is a tradeoff. Often this means seeking consensus over a period of time and building understanding of issues across countries which may be at different levels of understanding on particular issues.

This structural feature of the United Nations becomes quite pronounced when we are discussing technology issues that move fast and that are very advanced in terms of their development and the consideration of their policy implications in some countries but not so much in other countries. When you bring these issues to the United Nations the pace is not always as fast as some stakeholders would like it to be, but I think this is a tradeoff that we must embrace, because at the end of the day if we are just having like-minded conversations in small groups in small geographies where tech considerations are quite advanced, then we are leaving some people behind, and in the United Nations it is important not to leave anyone behind.

This is a general reason for why things are slow, but they need not be slow each and every time. There are opportunities where we can accelerate the consideration of issues by thorough preparation, by engaging multiple stakeholders beyond government—which means civil society, academia, the tech community, and the private sector—and by taking full advantage of some of the flexibilities that the rules of procedure within the UN forums provide us.

You see this at play increasingly, whether it is discussions around digital cooperation or discussions around gender issues, climate change, etc., where you have this interplay of multilateral forums and multistakeholder communities and discussions on the margins or linked to these intergovernmental discussions and multilateral forums. I am more or less optimistic about how we can handle technology in UN forums but still mindful of these constraints.

WENDELL WALLACH: You have had so many roles, but perhaps it would be helpful for our listeners to understand the difference between those projects that have been initiated by the secretary-general, those projects that have actually been authorized by the General Assembly, and projects, initiatives, or bodies that meet within the United Nations but are not formally part of the United Nations. Can you help us understand those distinctions?

AMANDEEP SINGH GILL: What you describe so eloquently are these three types of initiatives. If we get the orchestration right, then we can move faster and qualitatively differently.

In the last few years, the focus on technology issues has been there. For the secretary-general it is a priority alongside key challenges around peace and security, climate change, gender, etc. He has driven some of the discussions by taking initiatives that were within his authority, such as setting up the High-Level Panel on Digital Cooperation, with its 22 members—11 men and 11 women—who came from all walks of life. For the first time such a panel was chaired by two members who were not from the government sector, Jack Ma and Melinda Gates, who came from the private sector and from philanthropy.

This was an initiative of the secretary-general, but it led to a process in which the Member States started to participate. They discussed the conclusions, a roadmap on digital cooperation was put together, and now we are moving into a process determined and decided by Member States through a resolution of the General Assembly. That resolution passed in October of last year and calls for a Summit of the Future in 2024.

As part of the intergovernmental process of preparing for that Summit of the Future, digital issues constitute a track in themselves, so they are an important intergovernmental track for this Summit. The idea is to come together with something called a Global Digital Compact (GDC), a kind of charter of principles for an open, free, secure, and inclusive digital future for all: what that looks like, what kind of principles should be shared across all nations, and how we go beyond principles to land the guidance of the international community in the practice of governments, the private sector, et cetera.

WENDELL WALLACH: Before we go into greater detail on the Global Digital Compact, I would like to touch upon two things with you. First, I was interested in your noting that the panel led by Jack Ma and Melinda Gates was one of the first times the United Nations had a high-level panel led by people who were not diplomats or governmental leaders, and that raises this question of how essential that is when we are talking about technology governance, or whether we think the multilateral process can be sufficient without this participation by tech leaders.

Some of us are very skeptical of multilateralism, as you know, in this context. We think that we are dealing with corporations who are often much more powerful than all but a few states, and we are also deeply concerned that few of the states really understand the technologies at all. I wonder if you can illuminate us a little bit on how you understand the multilateral process and whether that is even the right place for us to try to formulate policies about emerging technologies.

AMANDEEP SINGH GILL: In theory, when states are engaged in multilateral diplomacy, negotiations, and policymaking at the United Nations, they represent the interests of all stakeholders, including private companies, that are on their soil. But in practice there is often a gap, because UN negotiations are led by diplomats from foreign ministries who may or may not have the resources or the inclination to coordinate with other ministries that may be dealing with the substantive aspects of digital issues: information and communications technologies (ICT) ministries, ministries of finance, ministries that are engaged with building up the digital infrastructure in the country. That is why you have to find ways of engaging with the multistakeholder communities beyond this established channel of ministries representing these wider interests.

The power element is something that people have woken up to, but it has been around. The power of transnational corporations as actors in international relations has been recognized by international relations academics from the 1960s onward, but digital is a different beast. The market capitalization and the concentration of wealth, resources, and power in a few companies is of a different nature. It has woken people up, and therefore you see an impact. Even governments beyond the dozen or 15 governments that have been into this for a while have woken up to this and are orchestrating their positions much better with the private sector players and others back home.

Still there is a gap, and that is where these multistakeholder consultations come in. I should hasten to add that while this panel was led for the first time by two nongovernmental people, there is a tradition of multistakeholder engagement going back to the days of the World Summit on the Information Society in 2003 and 2005.

WENDELL WALLACH: As you know, I have been a champion of putting in place some mechanisms for the international governance of artificial intelligence (AI). I would love to see that in the United Nations, but I have also been skeptical about whether the United Nations is constructed in a way that makes that possible and whether we should be looking outside of the United Nations to create mechanisms that function independently or in partnership with it.

I don't want to put you on the spot, but you are obviously literate in and deeply immersed in these questions around governance. How do you see that? Can the United Nations take the leadership role that we would obviously like to see it take, given that it is our one global institution that can represent everyone? If it cannot happen within the United Nations—and the slowness of some of these processes makes many of us skeptical—we are going to have to move outside of the United Nations and put something else in place. Share with me a few of your reflections on that.

AMANDEEP SINGH GILL: Let me give you a few reasons why the UN's role is important. First, there is no other universal forum where every country is represented, where you have a Charter, a Charter that is still inspiring, that reflects shared human values and principles, and where you have a variety of forums where people can come together also around specific themes, whether it is health, environment, gender issues, meteorology, you name it. You have these forums that have been built up over decades, in some cases more than a century and a half. So you have that basis, that universal convening capacity, to bring people together broadly but also in specific areas, and that is unique. That is unparalleled. Nothing matches that.

The second reason why the UN's role is important in the digital future is that the United Nations has some basic touchstones that are critical to managing the problem of misuse of digital technologies. The first is the human rights framework: international human rights law, starting with the Universal Declaration of Human Rights and the various conventions and treaties around human rights. If the digital future is not human-centric, if it doesn't respect human rights, human agency, and human choice, then we are lost, so I think the United Nations has that touchstone.

The other touchstone that the United Nations has is Agenda 2030 or the Sustainable Development Goals (SDGs), these 17 goals that are shared across the international community regardless of income level, where we are addressing certain planetary challenges from poverty and hunger to the green transition to climate change to the empowerment of women and other communities. You have these unique attributes of the United Nations apart from the convening capacity.

The third and final reason that I would say the United Nations matters is that today you have the leadership and the vision and the pathways—including what I mentioned about the Global Digital Compact—to pull different strands together and come up with something that is not exclusive to the United Nations, but at least it is taken to the level of leaders through the United Nations so that it can have more impact when it comes down to the tech sector and the private companies in whatever region of the world that you are in. If we just focus on one region and do something nice there and if it doesn't have an impact on the next billion to come online in Africa, in Southern Asia, and in Latin America, then we are missing something. We have that opportunity as well today with the UN's role.

WENDELL WALLACH: You have emphasized the strengths of the United Nations and what the United Nations brings into play. You don't have to comment on this, but I think for many of us there is this concern with the UN's weaknesses—the structure of the United Nations, the countries that hold power within the United Nations, particularly in the Security Council and other areas, and the difficulty of creating consensus within the United Nations, and whether those weaknesses will undermine some of the clear benefits that you have described.

AMANDEEP SINGH GILL: Definitely. I am not blind to them. In fact, the United Nations reflects the world as it was in 1945. The world has changed a lot with the rise of the Global South, and the structure of the Security Council today does not reflect the realities of the world today or what the world will be ten years from now, so there has to be a constant attempt to reform the United Nations, including the Security Council, and to adapt our methods to be more relevant to the younger generation. I will be the first one to admit that. We have to do much better as the United Nations.

I think we have to invest more in terms of our own capacity to understand these issues, the digital capacity. How many data scientists do we have? What is their age profile? Those are the hard questions we need to be asking ourselves: How coordinated and collaborative are we across different organizations? Are we competing for funding? Are we coming up with similar ideas but not collaborating with each other? Those are questions that my office, as part of its work, is looking into as well.

WENDELL WALLACH: Wonderful. Before I bring Anja in and talk about some of the work that the two of you focused on together, let's flesh out two aspects of your present role. First, explain to us a little bit what the role of envoy on technology entails—it is a totally new position—and what you hope to accomplish.

AMANDEEP SINGH GILL: The briefest description I can give is that this role is the focal point within the UN system for technology issues, in particular digital cooperation. Just as governments today are appointing tech envoys or have chief digital officers or chief data scientists, within the United Nations a need was felt for a focal point on these issues at a senior-enough level so that the private sector, civil society, and others who are bewildered by the United Nations—and some of your questions reflect that—have a single window that they can come to.

Part of that is also inward-facing, which means coordination and coherence across the UN system. We have—and this is a good development—different agencies taking the lead on different digital issues: the International Telecommunication Union (ITU) on connectivity, capacity building, and other issues, and the United Nations Development Programme (UNDP) on the development side of digital. So how can we put all those different capacities together and be more powerful, be more impactful on the ground? That is the coherence side of the role.

In terms of substance, I would clearly divide this role into two big pillars. One is the governance pillar: avoiding misuse, avoiding fragmentation of effort and of the internet, and reinforcing digital cooperation around societal and economic implications. The other is how we leverage these opportunities to accelerate progress on the Sustainable Development Goals, so how to leverage the digital transformation on the development side. This is briefly the role that this position entails.

WENDELL WALLACH: Do I understand correctly though that you are totally focused on what I would call the "outwardly turning missions" of the United Nations as opposed to the way the United Nations itself uses technology?

AMANDEEP SINGH GILL: Yes. It is not about deploying laptops and desktops, ensuring cybersecurity, and running the UN's data centers. I have very capable colleagues who are looking at the information and communication technology (ICT) side of the equation, so this is more about the policy side of the equation: What does the metaverse entail in terms of human rights implications or surveillance of individuals and societies? What is online harm, and how can countries collaborate better to address it? Those kinds of issues, and not the ICT nuts and bolts of the United Nations.

WENDELL WALLACH: There have also been UN initiatives that are directed at using technology in explicit ways to address the SDGs. Are you working with those teams, or are they a separate function within the United Nations?

AMANDEEP SINGH GILL: Definitely. I work with all these teams. We look at how we can empower them, how we can add value to their work, and how we can enable more partnerships for them. If you look, for instance, at this aspect of supporting Member States on their digital transformation, we have started an effort on building what we call the "common blueprint" on digital transformations, so pooling the expertise and knowledge of different UN entities on various aspects, whether it is the UNDP/ITU tools on maturity assessments, or the UN Conference on Trade and Development's tools on e-commerce, etc., putting them together into a kind of end-to-end guidance on the digital transformation. We work with all of those teams and are in a sense a service entity, an enabler, a catalyst for some of these efforts, also putting more political attention to it.

I have been engaged with the G20 presidency of India, followed by Brazil and South Africa, and previously Indonesia, to see how we can bump up the Digital for Development agenda as a political priority. That is a cross-UN type of issue. If that results in initiatives, greater attention, and greater resources on Digital for Development, that has positive implications for all those teams working within the United Nations to deploy digital technologies responsibly, inclusively, and in an impactful way.

WENDELL WALLACH: If I have understood you correctly, you are not saying that you are the head or in charge of all those initiatives, but they are more distributed. Groups like UN Global Pulse and some of these other initiatives you have talked about have their own mandates, but you are working together with them.

AMANDEEP SINGH GILL: Exactly.

WENDELL WALLACH: Wonderful.

One more question. Tell us a little bit more about this Global Digital Compact since, as I am understanding it, it is the next big project. The fact that the General Assembly has in effect endorsed or authorized it raises it above many of the other activities that the secretary-general's office alone is able to initiate. Can you tell us a bit more about the Global Digital Compact, how you understand it at this stage, what you hope it will achieve, and what is still unclear as you move toward putting it in place?

AMANDEEP SINGH GILL: It will also allow me to answer one of your previous questions about what success looks like.

The international community has come up with Compacts in a couple of other areas—refugees and migration—where a need was felt to refresh thinking on norms and also to build a regular conversation under a large tent, bringing more communities into the conversation. You know, treaty-making in the United Nations is difficult, and norm-making is not easy and takes time, so is there a way to refresh our thinking on norms without getting into that? That is in a sense the origin of some of these Compacts, and you see how they have changed the landscape. On migration and refugees, for instance, there are regular consultations, there are large forums every two years, every four years, so you are able to implement some of the earlier norms in a more inclusive and impactful way. This is the thinking for the Global Digital Compact as well.

How can we align approaches to the governance of digital technologies in today's fragmented world so that we do not leave any gaps in governance and do not have people working at cross purposes in different domains? The process is going to be intergovernmental, so there are two countries that are going to be leading this process, Sweden and Rwanda. They are called "co-facilitators," and they will hold their first meeting in fact a week from now, and a process of consultations will continue for about six months. Then at some stage it will tip over into an intergovernmental negotiation on the content of the Global Digital Compact.

It is too early for me to speculate on what that content will be like, but the secretary-general has given some areas that we should reasonably expect to be part of that Compact, and that includes: universal connectivity, making sure that, for instance, every school and every health facility on the planet has internet access—we saw during COVID-19 how important this is for societal resilience and for well-being; then addressing online harm, protecting human rights online as we protect them offline; addressing misinformation and disinformation; making sure that data is protected and people have choices over how their data is used, so this aspect of data empowerment; addressing AI, aligning AI governance with our human values; and also looking at digital public goods and digital public infrastructure as these kinds of "digital commons" that have guardrails and allow people to access services, build their own services, and build their own businesses with low entry barriers because if we have instead of a commons a club kind of thing, then only a few people will benefit. Instead we need commons where more people can take advantage of the digital opportunities.

Those are some of the areas that could fall under the GDC. The hope is that by September 2023, when the ministers meet to set the agenda for the leaders in 2024, we will have advanced sufficiently on these different substantive issues by consulting not just amongst the governments but also outside—civil society, academia, and others—and produced a substantive basis, a background, for the ministers to look at what should be on the agenda, what they should be negotiating. Then we have time after that to flesh that out.

WENDELL WALLACH: This has truly been edifying, and you have put it so clearly and succinctly that I hope this has helped our listeners get a better sense of what is taking place in the United Nations around technology policy as we move forward.

But let's move back a few years, if you don't mind. Discussions on whether autonomous weapons systems could or should be regulated began at the Convention on Certain Conventional Weapons in 2014. That year saw the first of three annual rounds of informal expert testimony. I was one of the experts who testified.

At the end of the third year, the topic was bumped up to a more formal group of governmental experts, what was called the GGE on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. If I understand it correctly, the charge of that GGE was to explore the prospect of putting in place binding regulations.

You chaired the GGE and worked closely together with my colleague and co-director of the AIEI project, Anja, who at the time was the director of disarmament affairs for the United Nations in Geneva. While unable to reach agreement on regulations, you were nevertheless able to shepherd the 125 nations to agree upon ten guiding principles for the deployment of autonomous weapons systems.

I want to ask you both—and I will start with you Amandeep—how you view those principles today and whether they have been effective.

AMANDEEP SINGH GILL: I think those principles that the 125 High Contracting Parties, as they are called, agreed on are a valuable contribution not just to addressing the specific problem of regulating AI in the weapons space but also broadly how we address the implications of artificial intelligence.

Let me just take one example from those principles to illustrate what I say. When we said that international humanitarian law continues to apply fully to the use of all weapons systems, including any lethal autonomous weapons systems, what we meant is that regardless of the direction that technology may take there are certain essential norms, analog norms, human norms, that we must continue to uphold. We gave ourselves a standard, a touchstone, to see if technology is going in the right direction or not. This was an instance of technology-neutral policy guidance being crafted not only by governments but also, as you know from having participated as an expert, by many other experts in close consultation with other stakeholders.

Where these principles matter is in terms of the substance but also in terms of charting a new path for regulating digital technologies, which is: you get what you can and then you build on top of that. If we go straightaway into treaty-making, that may take a lot of time, and technology moves very fast; we may find it hard to reach consensus, and when we do it may be a lowest common denominator, which is not that meaningful. By working on these shared principles and shared understandings of what different things are, we start to have an impact on how technology is developed and thought about more quickly than we would through traditional means. I would highlight both the substantive side and the process/procedural side of it in terms of a good legacy to have.

WENDELL WALLACH: If I understand you correctly, you were putting in place building blocks and trying to lay the foundation to talk about the next level of issues. As you have observed what is going on in the development of AI, do you feel those building blocks are being taken seriously by corporations and governments or do you think they have more or less forgotten that they were put in place?

AMANDEEP SINGH GILL: I don't think they have been forgotten. I was involved in an exercise that the Institute of Electrical and Electronics Engineers (IEEE) did a couple of years after the adoption of these principles. We looked at how these were being reflected and how different militaries were looking at the development of these technologies, and we saw that there was reference being made to these principles. So they are having an impact. At least they are being referenced, downloaded, and looked at.

We do not have a mechanism today to actually check, except an existing mechanism called "weapons reviews," which some countries abide by. You have the principles, and you have the building blocks of understandings and approaches. The next challenge is: how do we have an exchange of experience on how we are implementing these principles?

I think this is a crucial point also for the Global Digital Compact. If we do not have these regular conversations, these forums, these designated tables for discussing followup on certain issues, then how do we know that we are making progress, and how do we learn from each other? Landing principles in practice gives you some good and bad lessons, and you need a place where you can come back and discuss those, and then everyone goes back and iterates and improves their understanding of the implementation of those principles. I think that is the gap that still needs to be plugged, but the principles and building blocks have given us a foundation for that.

WENDELL WALLACH: Anja, maybe you can tell us a little bit about how you viewed these ten guiding principles that Amandeep has just discussed, principles for the deployment of AI and not just autonomous weapons systems.

ANJA KASPERSEN: First of all, I agree with everything that Amandeep just said. I will comment a little bit on your question, Wendell, and then I am actually going to comment on Amandeep's chairmanship, which I also think will be of interest for those listening in who want to learn a thing or two about multilateral diplomacy.

As you alluded to, Wendell, informal discussions were launched as early as 2014, prompted by France at the time, within the UN's 1980—so this is an old treaty—Convention on Certain Conventional Weapons, shortened to CCW. I would opine that discussions and negotiations have evolved a lot since, which Amandeep also alluded to. Quite a few would say they are far from where they ought to be at this stage—and the disagreements are obviously still looming large—but I think it is still important to recognize just how far the multilateral understanding and collective thinking around complex issues and concepts have come. That is relevant not just to the discussions around military necessity and military technologies and deployments; these are the same types of issues we see in every field that embeds, develops, or tries to use AI in accordance with sound societal principles. It is important to recognize that these are shared challenges, not just in the military domain, but the discussions have evolved a lot.

I certainly experienced this firsthand, having been part of the arms control and nonproliferation community for a long time and having focused a lot on multilateral learning within the arms control environment, and I think Amandeep would agree with me. Particularly now, amid fractured international relations, it is important to keep in mind that autonomy and autonomous features in AI relevant in a national security setting are a particular topic in their own right. Although some of the challenges and opportunities are shared, we are talking about a technology that presents very unique challenges but also opportunities, both for territorial defense and for national competitiveness and sovereignty. That also needs to be taken into account when discussing these issues.

Amandeep referred to the ten guiding principles. There was an eleventh one added a year later, which again affirms the principle that IHL and international law apply fully and stand strong for any weapons system, including any type of new or emerging technology and AI embedded into weapons systems.

I think for both of us, looking ahead at the time, we were hoping that this would definitely serve as a threshold. You used the words "building blocks," Wendell, which I think capture very well what already then, in 2018, appeared to present rather significant conundrums and tension points.

I would like to mention a few other principles which I think demonstrate the foresight shown by the High Contracting Parties at the time, thanks also to Amandeep's chairmanship.

One is that IHL applies fully to all weapons systems. We talked about that already, but it was also the first time there was a deep recognition among Member States that it should apply for the entire life cycle of the weapons system, not just the part where it gets deployed, because by then—as all of us who know a thing or two about AI understand—it is too late. It was very much about looking at that entire process, which means that you are dealing with quite different types of actors than you would normally in an arms control treaty, where you typically look more toward the latter phases, the deployment phase.

There is also a point about a "responsible chain of human command and control." It speaks not of a meaningful but of a responsible chain. You and I, Wendell, have had a few articles out where we questioned this notion of "meaningful human control" almost as a panacea, when we know full well that these technologies, once embedded, are not that easy to control through a "human in the loop." What we were discussing at that time with the High Contracting Parties was this responsible chain: what does it look like, and what needs to be done to put it in place?

This may seem strange to some of the listeners, but at that time there were not a lot of connections between discussions going on about cyber and discussions happening about lethal autonomous weapons systems. The principles also tried to bring this together by actually focusing on physical security, physical safeguards, and protection against hacking and data spoofing. There was also the risk of acquisition by terrorist groups. Lastly, the principles also reflected on the risks of an arms race based on perception of capability, meaning the risk of proliferation also needed to be considered properly.

There is also a principle around the necessity of risk assessment and mitigation measures, our favorite topic, Wendell, ethics essentially: How do you embed that into the lifecycle of any form of discussions around these systems?

Amandeep's and my favorite point: One should not anthropomorphize these systems, which was very foresighted at the time, but now with ChatGPT we have seen the evolution of deepfakes and generative image technologies that allow us to present technologies and systems with human features. "Anthropomorphize" basically means attributing human features to a computer system. This is not typically a word often discussed in a military context, but it was very important at the time because the trend was already there, and the temptation to do so in an imperfect environment—which a battlefield would necessarily be—and to use this as a tool of deception was something to be addressed early on.

I would say that there is a lot of foresight in these building blocks, foresight that I think, whatever form the discussion takes looking forward, should definitely make sure to integrate.

Amandeep, I have now spoken about my impressions from all of this, but I would love to have your comments to what I just said.

AMANDEEP SINGH GILL: I think you explained the principles and their substantive importance.

I would just like to return to this procedural impact, the principles-based approaches to digital governance. You see this now being taken forward in the United Nations Educational, Scientific, and Cultural Organization (UNESCO) with the Recommendation on the Ethics of Artificial Intelligence, where you have similar principles in different domains; building blocks are described and policy recommendations given around literacy, capacity building, checking for biases, et cetera.

This model has spread somewhat, and that is a good thing, but we need to fill in those spaces that are still in a sense not clear enough: how are you landing principles in practice, how are you sharing experience around those, what are you learning, and are you building some meta-knowledge about governance through that? This is the next challenge, and it needs to be taken care of relatively quickly, because otherwise people will get tired of principles by themselves if they are not brought into practice.

ANJA KASPERSEN: Amandeep, I had the pleasure of working next to you when you were chairing not only the CCW GGE discussions but also other treaty discussions. Many of our listeners probably know that you are also a published poet. I saw an experienced arms control diplomat sometimes bringing poetry into arms control treaty negotiations, especially when it got really, really tense in the room. It would be very interesting for our listeners to also hear a little bit about your experience of actually overseeing a discussion that gets as tense as this would sometimes do in any form of arms control negotiations. What were some of the learnings that you took away from this, and what do you think is important for our listeners to understand about being involved in such a process?

AMANDEEP SINGH GILL: That is a great point. I think the biggest learning was about transdisciplinarity. Having been trained as an engineer, I must admit that I also had a tendency to look at only the tech side of things, the power that technology bestows on its developers, but I think unless you bring the social sciences, the behavioral aspects, and the ethics side of it and come at it from different perspectives you are going to miss out on some risks.

Poetry is just one aspect of it. You move away from prose, which is very directing and which comes from certain areas of knowledge making. Poetry yanks you out of that and is a good device to force you back into almost an I'm-not-taking-a-position type of stance, looking at it from different perspectives so that the right choices, the wise choices, can be made.

So transdisciplinarity certainly, and then dealing with health, which is such a sensitive issue at a personal level but also at a societal level. We talk about social determinants of health. You cannot just bring data and AI to health and solve problems of health coverage and access to quality health services. You have to do it from a transdisciplinary lens. That is again a big, big learning from those days.

ANJA KASPERSEN: For those attending the Summit of the Future, can they expect a few poetry sessions as well?

AMANDEEP SINGH GILL: I hope so, if we get the opportunity to. I won't be chairing, so I can pass on tips to the chairs and keep them in the right frame of mind.

WENDELL WALLACH: In our conversations together over the years, the two of you are very steeped in understanding the difficulties of multilateral processes whereas I function as, if not the embodiment of impatience, at least the expression that we don't have so much time to futz around with these technologies, that they get deployed very quickly, and they get entrenched in our societies in ways where it becomes almost impossible to reform or alter them after the fact.

Particularly when we are talking about autonomous weapons systems I think we are in grave danger. Many prognosticators suspect that if autonomous weapons systems have not already been deployed—it is sometimes hard to know whether they have or have not, since the difference between a weapons system with human decision-making and one with machine decision-making may come down to software code or a switch that is not easily discoverable—they will be deployed in Ukraine, and it is not even clear which side will deploy them first. Both sides have perhaps been looking at them. This becomes a serious question of escalation and justification. Once they get deployed, that becomes other countries' justification for what they do.

I am constantly the person who is at least expressing that, as important as getting all the details right is, these become challenges in which "getting it right" or perfect can be "the enemy of the good," the enemy of just moving forward at a more expeditious pace.

I want to first ask you, Amandeep, and then you, Anja, how are we progressing so far in your mind in putting in place international binding regulations on the use of autonomous weapons systems?

AMANDEEP SINGH GILL: The secretary-general has been quite clear in his pronouncements from 2018 onward. He has "put his flag down" in a very clear area, and that is that machines should not have decision-making power over life and death.

That said, we have the intergovernmental discussion that is taking place in the CCW, and my colleagues at the Office of Disarmament Affairs are supporting that discussion. That discussion is moving along from principles to these building blocks and the sharing of understandings, but I believe that time is of the essence. Just like you, Wendell, I think this is a discussion that needs to move and needs to result in actions and outcomes that allow us to take the guidance contained in the principles, in the pronouncements of the secretary-general, and in the pronouncements of many countries and civil society, so that—there are many theaters of conflict; the world is in a difficult situation today—we don't add this additional aspect of uncertainty of escalation. We have enough to deal with already with other weapons, weapons of mass destruction and cyberweapons, again areas where international outcomes are important. There are these 11 norms on cyberwarfare, but how do we land norms in practice?

There is a discussion on cybercrime taking place in Vienna and New York. These different aspects of the digital and security interface, whether it is AI, old-fashioned and not so old-fashioned cyberweapons, and cybercrime need to move along faster. Without that we will not have trust in the digital economy, without that we will have less trust across nations, and without that we will have more uncertainty around escalation and around proliferation to nonstate actors.

WENDELL WALLACH: What do you think, Anja?

ANJA KASPERSEN: Oppenheimer, when he was speaking about his role in developing the nuclear weapon, often spoke about how he regretted not having had sufficient foresight in explaining what the impact could be—societal, political, and economic. I think we have the foresight at this point of what the impact could be. I think what is lacking, as Amandeep was referring to, is an honest dialogue between states, which is very difficult at this point in time. I also think time is of the essence.

To my earlier point, I don't think we should throw out the hard work that has happened on the intergovernmental side of things, but I also don't think we should ease off the gas pedal in making sure that we move forward on the trajectory that is needed to mitigate harm.

WENDELL WALLACH: One of the things I am wondering in all of this: when we—that small body of scholars and policy planners who did have the foresight very early on about how problematic these kinds of weapons systems were going to be—first started talking about this, what I largely heard from the diplomatic community was, "That may be true, but we can't regulate it." We were talking about subtle distinctions between whether a system was humanly controlled or directed and whether its actions were at the initiation of its own software, its own powers of discrimination, or what have you.

I do sense that something has changed. It is not as if we know how to manage or have any degree of effective oversight or limitation of this kind of weaponry, but I am starting to feel that all of the military advisors and policy people are getting that the foresight that was coming from the scholars and a few of the policy people was right, that we actually can't humanly control these systems very well and therefore the dangers they pose are very high, even though the problems of verification are still there and the problems of how you might regulate these systems in an effective way are still there, and therefore they might get deployed no matter what.

ANJA KASPERSEN: I think one of the things that I find very encouraging—so let me respond to your rather more downbeat comment with something more positive, and I think Amandeep has seen the same thing—is the engagement from groups, communities, and transdisciplinary professionals, scholars, and everyone really, down to young people in schools. The engagement on this issue is quite remarkable for an issue that is as serious and as technical as this is.

I think the broad level of engagement is very encouraging, but as you said there are obvious challenges to making sure that we get the discussion to where it needs to be among states, and I think that is where we need to come together and ask whether we are discussing this in the right type of forums and with the right type of framing.

You mentioned verification as one issue. Amandeep obviously has a lot more experience on this, but with an issue like AI and emerging technologies, verification will obviously be the most difficult thing on which to come up with something that works, and it is also almost where any discussion needs to start. That is quite counterintuitive for a normal arms control instrument, where you first come to agreement on what you are trying to regulate and then you come to an agreement on how you are actually going to verify that agreement. In the case of AI you can only agree on what you actually know you are capable of verifying, because the systems we are building are so complex that verification becomes the big Achilles' heel of any instrument's effectiveness.

That is very difficult, which is why I go back to this point that it is very encouraging to see that we have all the right types of communities involved to be able to have that discussion if we open up for it. I am not sure what you think of this, Amandeep.

AMANDEEP SINGH GILL: I think that is right. Traditional arms control methods and approaches may not work in this space, even looking at it from a non-military perspective, because the larger issue is the nature of AI, at least the current paradigm of what we call AI, and it is being deployed in many sectors.

In health again you have life-and-death types of decisions, and different regulatory bodies have started to regulate AI for health in their own ways. There are commonalities. The European Union has a risk-based approach. Others may have a different approach.

I think we need to come at it from different angles and drive this kind of diversity of AI governance experiences into a common area. The reverse of that process is that it spills back into different domains again. For instance, if you are looking at certifying algorithms that are used to assess certain kinds of health risks and assist doctors and others in making diagnostic decisions, the algorithm learns in deployment, and its performance changes. At what point does it come back for recertification?

The same thing is true with weapons. People in the commercial space may have proprietary data sets, they may have proprietary algorithms, et cetera. Likewise in the military space you will have those considerations of confidentiality. There is no equivalent of a national regulator—the Food and Drug Administration or something like that—for the military space at the international level. At the national level, yes, there are weapons reviews for some countries if not all, but we need to find a place internationally where you can talk about these experiences.

Sometimes it may be in your own interest to tell others about the duds and the mishaps that you have had with certain approaches. Again, reaching back into nuclear history, with the PALs, the permissive action links, which kept nuclear weapons safe, it was a decision to leak them to the other side because it benefited everyone.

WENDELL WALLACH: When we all first entered this conversation there was this strong expression of commitment to international humanitarian law, and that seemed to be an underpinning of everyone's conversation. It didn't matter whether it was the Americans, Chinese, or Russians, they all gave a strong expression of the need to reinforce international humanitarian law in this space.

Now here we are. We have reached this point where there seems to be a general consensus that we need to regulate the deployment of these systems. There is also knowledge that it is not going to take place in the simple arms control and verification frameworks that have worked for other kinds of weaponry and therefore you probably need a higher degree of trust that everyone will follow the guidelines that are put in place.

Have we reached the point where putting those guidelines in place is becoming even more difficult? In other words, are we confronting an intractable problem largely in light of what we are witnessing as war crimes by Russia in Ukraine? Russia isn't quite the power that it was 20 years ago, but we nevertheless have one of the great powers that in international forums might want to give lip service to international humanitarian law, but it doesn't look like in practice they will follow it. If that is going to be the case with autonomous weapons systems, whether they are lethal or just damaging in other ways, then we may have an intractable problem in terms of whether we can actually put effective regulations in place.

AMANDEEP SINGH GILL: Wendell, military issues and security issues are no longer my remit, so I will refrain from going into that, but I would say just on IHL I think professional militaries all around the world have an interest in upholding IHL. They work with international law experts, they have certain kinds of protocols and trainings, et cetera. I think if we anchor the lethal autonomous weapons discussion in IHL there is a good reason. I think that is a basis for a common understanding. You are balancing military necessity with the humanitarian imperative, and you are bringing a humanitarian perspective to a technology issue. I think this should not be underestimated. This is quite an achievement.

I think what we need to focus on now is the relatively unglamorous work of building those conversations, tables of exchange, and designing them in a way that is a safe setting—the professionals are there, the transdisciplinary expertise is there—and some kind of sharing happens and is curated as common knowledge for the benefit of everyone.

The good news is that this is beginning to happen in other sectors. Here I am a little more confident in saying what I am saying. For instance, philanthropic organizations are taking inspiration from the World Health Organization's guidance on AI and data governance and UNESCO's AI principles, and they are beginning to look at their grants to different regions, seeing how people are deploying AI for health, what they are learning in terms of governance and in terms of biases in data sets and data governance, and whether we can pull it together for everyone's benefit.

I think this is a movement that needs to be sustained in different fields. It may have collateral benefits for areas where it is relatively more difficult today to make progress.

WENDELL WALLACH: Before I take us into another subject area, Anja, is there something you want to say in respect to this matter?

ANJA KASPERSEN: I would have to concur with Amandeep, that this is no longer my remit either, so I agree with everything he said. We need better dialogues.

WENDELL WALLACH: Before we end this, Anja and I thought it would be nice if we talked a little bit about another subject. In our public space and with this audience we talk very much about technology governance and technology ethics, but among ourselves we often talk about self-understanding, philosophy, and spirituality, particularly Indian philosophy and spirituality, something that has interested all three of us for a long period of time, obviously naturally for you, Amandeep.

Given that serious grounding in Indian philosophy and spirituality, I wonder how it has influenced your focus on the development and deployment of emerging technologies, Amandeep.

AMANDEEP SINGH GILL: That is an important influence. It is a daily influence in some ways. The deep, deep wisdom that is there in Vedanta in the ancient Indian texts I find also in other cultures—Rumi's poetry, for instance, or what Jesus Christ said in his Sermon on the Mount. That deep, deep wisdom is common across humanity, but perhaps because I come from that culture it is easier for me to relate to Vedanta or what Ramana Maharshi has said about the nature of the self or what the Shiva Sutras and the Spandakarikas say about the nature of the world, the nature of the universe that we live in. In a strange way it is very similar to what I have studied in modern physics, in quantum physics. There are these interesting parallels between some of that deeper wisdom and what modern science is telling us today.

It has also taught me some humility. Although trained as an engineer with a lot of science in my professional career, I think we don't know enough. Humility is very important. There are "unknown unknowns," so we need to be careful.

I think the digital world is a nice mirror to humanity, AI in particular, because it claims to emulate human intelligence. It forces us to ask what human intelligence is, what being human is, what being intelligent is, where you are being deluded or there is an attempt to delude or a risk of delusion, and again what is real and what is not real. Those are things that strike you every day as you are dealing with technology and as you are dealing with AI and the implications of all that is happening around us.

It is a good touchstone, just like I mentioned human rights and the Sustainable Development Goals as a touchstone. This kind of philosophy—not to be deluded by what is not real, to look very deeply at what it is to be human and what it is to be intelligent—is helpful.

WENDELL WALLACH: Can I push you a little bit? In my understanding of you it is not just philosophy, but it is very grounded in who you are. It is embodied in who you are. It is not just the ideas you have. Is that fair?

AMANDEEP SINGH GILL: Yes. That comes through in terms of the approach to issues as well, the approach to multilateral negotiations and working with different types of perspectives. You are not stuck in those things at the level of the mind or your conditioning. You don't see others as "others." You see that as a side of you in a sense. You tend to work more let's say sincerely to build consensus. You tend to hold off judgment on issues for a little longer, to look at different perspectives before you come to some kind of decision. Those are things that become part of you.

I will say that anyone who is interested in art or poetry or loves nature will come to similar conclusions. It is not just philosophy, yoga, or meditation that gets you to that. It could happen in many ways.

WENDELL WALLACH: That was well put. I think the reason Anja and I both wanted to bring that out in this podcast—and perhaps we are going to try to find ways of bringing that out more in the future of the AI & Equality Initiative—is that with many of the people we see as taking leadership roles in these AI ethical issues it is not just about their concepts and their principles, but it is very much how they perceive who they are, how they find a grounding in this world, and how they navigate the uncertainties in a future that we can perceive that does not have easy answers in terms of which policies you may or may not champion or which kind of governance you may or may not champion. I am hopeful that we can find a way of bringing that out more as we develop this project, as we develop what we consider to be this community of practice.

AMANDEEP SINGH GILL: I enjoyed a webinar sometime back with Peter Hershock at the East-West Center in Hawaii, who is trained in the Buddhist tradition. We had people coming at the AI question from different ethical and religious or spiritual perspectives. That was quite enlightening.

I think there is a role when we say civil society and multistakeholders—in the United Nations we use this term "faith-based organizations," and you have this tradition now in the Vatican of deep, deep discussion on AI ethics. There is the Rome Call for AI Ethics. There is other work. I think there is a role for these perspectives to come into the policy discussions, not in the sense of ethics or the kind of, "Ah, another stakeholder to bring in," but in more of a paradigm-shifting way.

There are more powerful technologies on the way, whether it is the metaverse or quantum computing, and when quantum computing gets going there will be such powerful shifts that what we have seen in terms of Moore's law and other aspects of these fantastic, powerful changes over the last 60 to 70 years will pale in comparison with what is coming. If we are not stronger as humans, if we are not anchored in our humanness, if we are not wiser as beings, then it will be very, very difficult for us. So it is not just a question of being smart about regulation, policymaking, multilateralism, and multistakeholderism; it is also about how we are as humans, how we are as our children grow up. At the end of the day, the Summit of the Future is about the future, so youth voices and children: how are they being trained, or how are they building themselves up as wiser human beings so that they can handle all that complexity?

ANJA KASPERSEN: Amandeep, before we conclude I would like to raise a few issues that I know you are particularly passionate about. One of the issues that you have been pushing hard for is a new type of discussion around "data commons." Can you elaborate on that?

AMANDEEP SINGH GILL: This is about the good-use side of digital technologies rather than only the misuse side. I think if we just focus on misuse, then we are missing something. We need also to bend the arc of investments and the arc of research in the direction of good use.

What is the best way to shift the thinking? My view is that we need to collaborate internationally and globally in a few critical areas, such as health, education, the green transition, and agriculture and food security, to build data commons: you bring together human capacity and trained human resources, you pool infrastructure or build up collaborative infrastructure, and you pool data sets to create gold-standard, flagship data sets. That catalyzes research and innovation in these specific areas.

For instance, building climate change-resilient crop varieties. Just as in the 1950s and 1960s the world got together and built the science that was critical for the green revolution, I think we need to use the power of data and AI to accelerate research and science in those kinds of ways. Call it a "CERN for AI," as Gary Marcus puts it, or call it whatever; I think it is important that we also focus on these areas where data can be leveraged to accelerate development.

It is not just about data. It is also about the human capacity around data, because by some estimates there are 3.5 million missing data scientists in the SDG space today, so we need to train them, just as several million people are missing on the cybersecurity side. So there is this human element: How do we train public officials and others not only to handle data but also to use data to measure where we are on the different SDGs (the metrics of progress), as well as to use data for innovation and for accelerating progress? How do we train people to do this, and how do we empower them with infrastructure that is distributed and low-cost, so they are not uploading their data to centers in certain geographies and missing out on full participation in the Sustainable Development Goals agenda?

If I say "commons," it means not just data but also infrastructure, also human capacity, and the global purpose that is provided by the SDGs. That is one of the areas where I think we are missing some action, and it is an opportunity again with the Summit of the Future next year to launch some practical initiatives.

ANJA KASPERSEN: If I may just push you on that, Amandeep: you mentioned human capacity and education and the need to invest in both. In your view, are we investing sufficiently in humans' ability to interact with this data and with machine-based systems?

AMANDEEP SINGH GILL: Definitely not. I think schools everywhere, not just in the Global South, where primary education is important, need to bring in digital and data, not in the sense of ICT education or of putting laptops and tablets in the hands of children, but in the sense of asking what it is to be "data-fied." When you reduce something to data points, what kind of nuance are you losing?

That kind of thinking around digital and data, beyond learning to code or learning to use certain devices, is the capacity building that is missing today and needs to be scaled up. As I said, we need more trained data scientists and AI specialists in the development space.

WENDELL WALLACH: I know you have some creative thoughts on education. Perhaps you can say a bit about that.

AMANDEEP SINGH GILL: Right, education in the sense of capacity expansion, in the kind of Amartya Sen/Kenneth Arrow/Toyama understanding of human capacity, where you are able to do more with what you have. I think the digital age (again, leaving aside ICTs in education) requires a new way of approaching education. It requires adjustments to our curricula. It also requires schools to collaborate more across borders and regions, and teachers to collaborate more around case studies, experiences, content, et cetera. That can be digitally enabled. We saw during COVID-19 how learning experience platforms were helpful in bringing teachers together and in ensuring continuity of education.

What happens after COVID-19 now? Can we, as we look at the SDG agenda and at engaging youth, go beyond youth participation in meetings to aligning the international agenda with education systems?

Education at the end of the day is also very political. It is very touchy and goes to the heart of national sovereignty issues, but I think there is also an international dimension today with these global challenges around climate change or digital technologies. Can we think about education as also an instrument of better policymaking on digital?

Let me explain. If we can prepare our citizens to be more mindful of the challenges, then instead of recruiting a million content moderators or spending billions on AI for content moderation, we have less of a challenge to handle in terms of self-harm or the misuse of data, because we have citizens who are, in a sense, "fit for purpose" for the digital age. They are less likely to be deluded.

We need to see shifts in the education system, and this will involve parents, school administrators, teachers, government regulators, and all of us. You saw a little bit of that discussion during the Transforming Education Summit last September here in New York. This is the beginning of the story. This has not gotten enough attention, but it is very, very important.

ANJA KASPERSEN: Are we doing enough to protect children? You spoke about how to educate and how to bring educators and regulators onboard to make sure we prepare people to engage with digital goods, common goods, data, and machine interfaces. But are we doing enough to protect those who may not be at an age where they have the same protections in place? I am talking specifically about children. I know the United Nations Children's Fund has been working hard on this, organizations related to IEEE have come up with age-appropriate design standards, and many other organizations are working on this. Are we doing enough?

AMANDEEP SINGH GILL: The blunt answer is no. There are huge gaps. This is not only about online sexual abuse or online exploitation of children; it is about all kinds of harms. There is illegal content, but there is also content that is legal yet harmful. What happens to sleep? What happens to deeper thinking around issues, to self-image, and to the mental health issues we have seen linked to social media posts in the United Kingdom? Those are the considerations behind the legislative effort that is underway.

We are certainly not doing enough. There is greater awareness, and the social media platforms are under pressure to respond, but with what we see now, the surge in content and in live video streaming among other things, our existing investments are not good enough.

We need to devote more resources. There is a need not only for clarity in policy and legislative frameworks, and for the work that IEEE and others have done around age-appropriate design, but also for more resources. Those who are benefiting from more users on their platforms, often users below the age of 13, need to put more of those resources into child protection online. The platforms claim they have ways to check that users are the right age, but it is not working, and we know it.

They need to work more closely with civil society groups, academics, and others, so that research can feed back, in a live sense, into how the platforms are run. They also need to work more closely with governments to ensure that the law is being followed, not just to the letter but also in spirit.

ANJA KASPERSEN: If I may, you spoke earlier about the importance of spirituality and of being grounded in who you are, which the digital age may actually be taking away from our kids: the ability to form that sense of self. As the United Nations technology envoy but also as a father, what recommendations or insights can you share with listeners who are parents and who are grappling with, "How do I do this? It is very overwhelming. How do I even start engaging with my kids in a way that gives them tools to ground themselves while also navigating digital goods for the positive benefits they provide?"

AMANDEEP SINGH GILL: In a very humble way, I think as parents we need to spend more time with our children and be more present in their lives, not in an obsessive, heavy way, but just knowing their first girlfriend or first boyfriend, what happened at school. That presence, and those regular conversations, are good insurance against some of these online harms, helping to make sure your child is not feeling lonely or left out at school.

Then there is working on the digital literacy side: how they are using apps, how they are registering for things. As you are present in their lives, make sure you have some presence in their digital lives as well, and that you can help them step up in terms of responsibility for what they do online, with themselves and with others, because victims can often become perpetrators in cyberbullying and other things.

Also, I think, be more engaged with teachers and other parents. You might have the right atmosphere at home, but you have no control over how others are behaving and what impact that has on your own children, so be more engaged with the community around schools. I think we are losing a little bit of that. There is too much outsourcing of education and of raising children to professional institutions. They do a great job, but at the end of the day you can't outsource it as a parent, and digital parenting is different from parenting parenting. We have to step up to that.

The last thing I will say is that at the end of the day this is not only about the head, because tech is often seen as a mind product: if you are smart and writing good code, it is all coming from the head, and from the head you can change the world. No. You need to bring the heart into the equation. You need to bring the hands into the equation. With children, if the interplay of head, hand, and heart is fine, then we don't have to worry about the human future in the digital age; they will take care of it well enough. But if we are going like lemmings into this deluded thing, where everything comes from the mind, where code is law and code will solve everything, then we have a problem.

I think this is a challenge for all of us as parents, spouses, siblings, and friends: to be more in the analog world, more with what is around us and with us. It is very attractive to spend time onscreen. There is so much there; you can learn, you can be entertained, et cetera. But there must be balance. Sometimes you are better off cooking something in the kitchen, calling up a friend, or going for a walk.

WENDELL WALLACH: Thank you ever so much, Amandeep, for sharing your time, your insights, and your expertise with us. This has truly been a rich and thought-provoking discussion.

Thank you to our listeners for tuning in, and a special thanks to the team at the Carnegie Council for hosting and producing this podcast. For the latest content on ethics and international affairs, be sure to follow us on social media at @carnegiecouncil.

My name is Wendell Wallach, and I hope we earned the privilege of your time. Thank you.

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.
