ICGAI: Meaningful Inclusivity in Governing the AI Revolution

Apr 19, 2021

Don't miss Session 2 of the International Congress for the Governance of Artificial Intelligence (ICGAI) online speaker series! This event focused on "Meaningful Inclusivity in Governing the AI Revolution." The session includes insights from high-level experts and decision-makers on the key stakeholders in achieving effective AI governance, the necessity of meaningful inclusivity, and how we can stimulate cooperation as we navigate the challenges posed by emerging technologies.

MICHAEL MØLLER: Welcome, everybody, to the second session of the International Congress for the Governance of Artificial Intelligence (ICGAI): "Meaningful Inclusivity in Governing the AI Revolution."

This is the second session in this series so far. We had our first meeting on March 23, where we looked at catalyzing cooperation and working together across governance initiatives. It was a very good and very productive session. We received quite a lot of input from a number of major international artificial intelligence (AI) initiatives, including that of the UN secretary-general and his Digital Roadmap, the Organisation for Economic Co-operation and Development (OECD), and the Institute of Electrical and Electronics Engineers (IEEE), and we also heard the Chinese perspective on international governance of AI.

We focused on creative means of governing fast-moving emerging technology deployment and of reforming multilateral governance, and we underscored the challenge of instilling trust in governance and the criteria for establishing such trust. We looked at the challenges posed by emerging technologies and asked if they can be effectively governed through reforms to existing institutions or if new governance mechanisms might be needed. Today we will reflect upon inclusivity and multistakeholder input into the international governance of AI with two panels and a conversation.

It is my great pleasure to hand over to my co-chair, Nanjira Sambuli, who as most of you will know is a tech policy and governance analyst. She will give a short introduction on the subject as she starts her moderation of our first panel: "Who are the Key Stakeholders?"

Our second panel, "How Do We Engage Specific Stakeholder Groups?" will be chaired by Anja Kaspersen, a senior fellow with the Carnegie Council for Ethics in International Affairs (CCEIA).

Nanjira, over to you.

NANJIRA SAMBULI: Thank you, Michael, and hello, everyone. It is a great pleasure to have you join us today.

I will just make a quick remark about the conversation we are about to have on key stakeholders. As many of you will know, inclusivity and inclusion are a tricky matter when it comes to our diverse makeup as a global population, so as we go into the first panel, which will look through a series of presentations at key stakeholders, I do encourage you to think about which groups may not yet be represented. Please share your thoughts via the Chat function. Let's hold space for those community groups that we may not necessarily bring to mind immediately and make sure that we take stock of who else should be involved in these conversations. We will have five presentations, and hopefully we will have a quick Q&A with the speakers before handing over to Anja to chair the second session.

Without further ado, I will hand over to the first presenter, Zlatko Lagumdzija. Zlatko is a professor at the University of Sarajevo, a former prime minister of Bosnia and Herzegovina, and a member of the Club of Madrid.

Zlatko, the floor is yours.

ZLATKO LAGUMDZIJA: Thank you, Nanjira. First, I want to thank you, Nanjira, for such nice words, and of course Wendell and Michael. I will stop here because otherwise if I want to thank everyone, four minutes is not enough.

I will talk a little bit about what we did in the Club of Madrid with some of my friends and related things that fit perfectly with this topic of meaningful inclusiveness in governing artificial intelligence, from the perspective of something we designed as the Club of Madrid together with the Boston Global Forum, called the "Social Contract for the AI Age," proceeding toward an international accord on artificial intelligence and hopefully ending in some kind of platform which I call the International Artificial Intelligence Agency (IAIA).

Multistakeholderism is definitely the key pillar of these thoughts that I am going to share with you. It is not only about how many ideas any one of us has; it is more about how many of them we ultimately bring to life by working together with other stakeholders around a shared vision and shared values. In order to bring our ideas to life, I think we have to work on their refinement, but those endeavors have to be accompanied by establishing the proper governance structures.

This multistakeholder approach to digital policymaking, which includes individuals and groups, governments, business entities, civil society organizations, academic communities, and artificial intelligence assistants, as we call them, pushes the inclusion of stakeholders one step further in the direction of establishing a just and proper global governance network for artificial intelligence.

Just to remind you that eight decades ago physicists were not deciding what to do with atomic energy. I saw in the previous panel on the 23rd, which I watched very closely, that some of the speakers were making a strong correlation between lessons learned from the United Nations, multilateralism, and the things that we did with space and especially with atomic energy. Now I think we are in a position that computer scientists are not the ones who are going to be deciding exclusively about what will happen with an invention like artificial intelligence, just like physicists did not have the exclusive right to deal with atomic energy.

To put it from another perspective, like "War is too serious to be left only to generals," in the same way artificial intelligence is too serious to be left only to computer scientists. I happen to be one, so I am speaking also as a computer scientist. And the other way around as well.

So, having in mind the lessons from our recent history in the United Nations dealing with atomic energy, I will just give you a few remarks about global governance of artificial intelligence to remind you that, back in 1946, the UN Atomic Energy Commission was founded by Resolution 1 of the United Nations General Assembly in order to "deal with the problems raised by the discovery of atomic energy."

That Commission was officially disbanded six years later amid the extended threats of the Korean War, the nuclear race, and Cold War dynamics. It took the United Nations from 1946 to 1957 to arrive at an independent, treaty-based international organization, the International Atomic Energy Agency (IAEA). The IAEA seeks to promote the peaceful and prosperous use of nuclear energy and to inhibit its use for any military purposes, including nuclear weapons; it reports to both the UN General Assembly and the Security Council and has a membership of 171 of the 193 UN Member States.

So today I think we urgently need a leadership call, and I see our gathering as one of the leadership calls from this profile of people for the establishment of an IAIA—International Artificial Intelligence Agency—as a next step to understanding the governance of the new AI and Internet ecosystem for work and life. So, using the same UN words defining the International Atomic Energy Agency, we can say that the IAIA is the "world's central intergovernmental forum for scientific and technical cooperation in the AI field. It works for the safe, secure, and peaceful use of AI and digital governance, contributing to international peace and security in accordance with the Universal Declaration of Human Rights and the UN Sustainable Development Goals (SDGs)."

I hope it will not take ten years to get from a commission to a proper agency dealing with artificial intelligence, as it took us to arrive at some kind of functioning body dealing with atomic energy. I see our discussion today as a kind of call to leaders to show the wisdom to use the fruits of science and technology to move our plans in the right direction.

I will stop here.

NANJIRA SAMBULI: Wonderful. Thank you, Zlatko.

ZLATKO LAGUMDZIJA: You know, university professors are 45 minutes-plus always.

NANJIRA SAMBULI: Indeed. Well, thank you for giving us quite the opening talk on this matter. It will be very interesting to discuss more whether the call is for new agencies or to reform existing ones. We can revisit that conversation.

Up next I will invite Trisha Ray to join us onscreen. Trisha is an associate fellow at the Observer Research Foundation and co-chair of the CyFy conference.

Trisha, the floor is yours.

TRISHA RAY: Thank you so much, Nanjira. It is a pleasure to be part of this wonderful session and panel.

I will be focusing my intervention on one key set of stakeholders, global tech giants, and on how the fact that they are primarily driven by the laws, ethics, cultures, and even the dominant language of the country they are based in affects the development and the governance of AI.

The first area in which we see this is the most fundamental: data. The dominance of tech giants based in a small handful of countries means that many low- or middle-income country governments won't have access to their own citizens' data for developing AI applications or for use by home-grown entrepreneurs. This is especially a concern for countries that represent smaller markets and thus may not have the leverage, the momentum, or even the necessary institutional structures to enact and then enforce legislation to bring this data into their innovation ecosystems.

The second area is diversity and inclusion (DI). Technology giants may commit to fostering an open and inclusive environment, but Big Tech's approach to diversity is often limited to silos rather than consistent efforts to diversify across programs and seniority levels—and I mean diversity in terms of national and multidisciplinary perspectives as well. There is little exploration of how policies made in corporate headquarters will conflict and interact with the social, political, or economic contexts of other countries.

My third point relates to agenda-setting power. The way many international multistakeholder forums are designed prevents the meaningful participation of communities and favors participation by those who have the most resources: the language of the proceedings and the resources required for sustained presence at these forums, including funding and infrastructure, become barriers. I am thrilled that the AI Congress is focusing on broadening stakeholder participation, and I hope we keep this in mind for future generations as well.

This leads me to the final point, which is cultural erasure. An example would be that English is the dominant language of the global Internet. While there are growing efforts to digitize other languages, this also means that languages that are, say, difficult to digitize will get left behind, and then the choices that platforms and big data firms make about which languages to invest resources in will further marginalize certain languages, and therefore certain communities, in the development of AI applications.

I will conclude my remarks there. Thanks again to Wendell and Nanjira. I am excited about what we are building here, and I look forward to everyone's comments. Thank you.

NANJIRA SAMBULI: Trisha, thank you so much for bringing that important perspective, and I hope we can revisit any examples of good practices, meaningful community participation, and multistakeholderism.

Up next I will invite Cordel Green, the executive director of the Broadcasting Commission in Jamaica.

Cordel, the floor is yours.

CORDEL GREEN: Thank you very much. I would like to raise four points for consideration.

The first is that powerful nations are invested in a new AI arms race, which is focused on profits and power. As a result, international cooperation appears to be giving way to AI nationalism. I think in that scenario the existing multilateral system will likely not be sufficient for meaningful, inclusive AI governance, and we can see the deficiencies in the multilateral system playing out now with vaccine distribution.

The second point is that at a personal level it appears that people are more trusting of relationships than they are of institutions. It is an opportunity to embrace direct democracy and put ordinary people at the front and center of governance, particularly the youth. After all, we are in the modern age of the masses, as demonstrated by the Me Too, Black Lives Matter, Occupy, and other such movements. So I make a case for considering a movement approach as a way to encompass diverse stakeholders.

The third point is for us to consider working through a network of the existing broad range of organizations, individuals, and stakeholders who are already focused on the Sustainable Development Goals. I suggest this approach because the SDGs are meant to address some of the types of AI risks that we are most concerned about—discrimination, bias, suppression, and exclusion.

Also, AI is not a widely familiar topic of interest, and unlike HIV/AIDS, cancer, or climate change there are no broad-based and influential grassroots organizations dealing with AI issues. The concerns affecting the most vulnerable will not be addressed without that type of civil society mobilization. This has got to be a governance priority.

The final point is that if there is no equitable access to data, there will be no real inclusion, simply said. I therefore suggest that one way for AI governance to be inclusive would be to champion a principle which treats data as we do air, as distinct from oil: one is freely available, albeit polluted; the other is proprietary. I admit that for some this idea will be as transformational as it will be preposterous for others.

I will stop there. Thank you.

NANJIRA SAMBULI: Indeed, Cordel. Let's see if there are comments that come up about the question of making data a commons essentially.

Over to Ms. Eileen Donahoe, who is the executive director of the Global Digital Policy Incubator at Stanford University and is with the FSI/Cyber Policy Center also at Stanford.

Eileen, the floor is yours.

EILEEN DONAHOE: It is perfect that I am following Cordel and his comments about the deficiencies of the multilateral system and the fact that ordinary people should be at the center of AI governance.

I am going to briefly talk about how the international human rights law framework, which already serves as one of three core pillars at the United Nations, can still provide an excellent foundation for governance of AI, and also about how the international human rights community thinks about meaningful inclusion and who the key stakeholders should be.

The first point is that, as I see it, the animating energy behind this whole exercise is a shared recognition that this AI revolution requires governance innovation that meets and matches the digital transformation that we have experienced. The AI revolution is an inherently globalizing force, and the effects are inherently transnational and felt across borders, whether it is from inclusion in or exclusion from the AI revolution, and those effects cannot be handled solely by sovereign nation-states within domestic jurisdictions and regulatory frameworks.

This is where the international human rights framework comes in. There are several features of that existing framework that make it very well suited to international governance of AI. I will mention three.

The first is that human rights already are internationally recognized norms that have the status of international law. The human rights framework was negotiated in multilateral and multistakeholder processes in the aftermath of World War II, which was a crisis of global proportions that we really do not want to ever repeat. Stakeholders from around the world spent years drafting the basic texts of the International Bill of Human Rights, which now can be seen as founding documents of the international order.

Human rights constitute a shared normative language of the international community. While we have lots of work to do in articulating how to apply and adhere to those rights in an AI-driven context, we don't need to start from scratch and develop a whole new set of principles. That is point number one.

The second point that makes the international human rights framework well-suited to international governance of AI is that it already speaks to the whole spectrum of risks and challenges that are associated with AI. The human rights framework starts with recognition of the centrality of the human person to governance, and it aims at protecting human dignity and the inherent value of the human person.

Many of the concerns we have already heard and that have been raised that are associated with AI relate to various dimensions of human autonomy, human agency, and human dignity, and these are things that are explicitly referenced in the founding human rights documents, for example, concerns about the right to privacy being eroded from collection of personal data that feeds AI; concerns about lack of transparency in apparently opaque systems that undermine fairness and the right to due process; concerns about ubiquitous surveillance facilitated by AI that directly impact the rights to freedom of assembly and association; and also how algorithmically driven information platforms and ranking systems can impact free expression, access to information, and even the freedom to form opinions. These are all rights that are explicitly referenced in the founding human rights documents.

Many other concerns associated with AI relate to meaningful inclusion, which is also addressed in the human rights documents. First and foremost, significant concerns have been raised about lack of inclusion in the economic upside and financial benefits of AI. There are also risks that AI exacerbates digital divides, wealth and income inequality, and the loss of jobs to machines, and all of these things are covered in the International Covenant on Economic, Social and Cultural Rights.

The last point is about key stakeholders. From a human rights point of view, human beings are the key stakeholders. The framework puts the primary obligation on government to protect the human rights of citizens, and it also recognizes the role and responsibility of the private sector in respecting human rights, but from the civil society point of view and the human rights community point of view, civil society must also have a seat at the table to protect the rights of citizens in this growing ecosystem.

I will stop there.

NANJIRA SAMBULI: Thank you, Eileen. It will be interesting to see if we can come back to the question of what the points of contention are in implementing the human rights framework for AI governance universally.

Last but not least for this session, I invite Yi Zeng, who is a professor at the Institute of Automation of the Chinese Academy of Sciences and a fellow at the Berggruen China Center.

Yi, if you may please join us on camera. The floor is yours.

YI ZENG: Thank you. It is an honor to be here. I am an AI scientist, so I am glad to explore the topic from the perspective of a technical developer, I would say.

The first point that I would like to raise is about the next generation. When we are talking about the technology's impact on the current generation, we are very uncertain ourselves. When it comes to the next generation, you could say, "Oh, they are not mature enough, and maybe we could make decisions for them as parents."

But you yourself are not really sure where we are going with the next generation of AI or its impact on society, so how can we make a good decision for the next generation? As a father myself, I am scared to make decisions for my own children about their future and what kind of impact AI services might have on them. That is my first concern.

As a technical researcher myself, I am also worried about AI scientists, developers, and designers, like myself. The reason is that in the national governance committee for new-generation AI in China, when we developed the principles, we included responsible AI, so people were asking: "Who is really responsible? Are you expecting the AI to be responsible, or are you talking about AI actors being responsible?"

Then people argue: "Of course, AI actors need to be responsible."

Okay. Then we go to developers and scientists, and then you see people claim, "Oh, we totally comply with the privacy regulations," let's say with the General Data Protection Regulation (GDPR), which provides for data erasure. Then some companies argue that they are 100 percent compliant with GDPR.

My point is that no one is really 100 percent compliant with GDPR. The reason is that, when it comes to data erasure, it is really easy to erase data from a database, but it is nearly impossible with current AI models to erase a user's features and data from the trained model. That means that from the technical perspective, we don't really have a solution for now.

Another perspective is that when people are talking about federated learning they say, "Oh, we don't need your specific data, so your privacy is preserved." The true story is that they train their model using a version of the data without telling you what they use the data for. In this case they don't have proper informed consent.

So you see that from many of the perspectives in technology, we are not ready to provide a more responsible AI to society, and in many cases people choose not to talk about it, especially technical researchers, including myself. In this case I would say that from the developer's and scientist's perspective, of course we are stakeholders, but we need to be really responsible to make the next generation of AI safe and ethical, not only for us but for the next generation that is the future for humanity and the earth.

I will stop here.

NANJIRA SAMBULI: Thank you so much, Yi, and please do stay on camera as I invite all of our other speakers to join us for a quick conversation.

Thank you so much for bringing a very unique perspective to this, and I would love to start with a question to you directly. How do you think next generations—and I am thinking here even of the ones yet unborn—can start being considered as stakeholders, even in the technical work you are discussing? In Wales, for example, they have a Future Generations Commissioner, whose job is to imagine and represent the unborn generations in government decisions. I would be curious what your thoughts are about how, in the technical community you are speaking to, these can be the kinds of perspectives that shape the workings, the "bits and bobs" as they are coming together.

YI ZENG: Thank you. We must talk in concrete examples, so I will give one quick example. It is emotion recognition in the classroom, which we saw in China about a year ago, when pattern recognition was used to recognize whether students were focused or not in the classroom. Technology researchers were enthusiastic because they said: "Oh, we can actually recognize whether they are focused or whether they are doing something special in the classroom." This is a challenging technology scenario.

Some of the parents said: "Whoa. It really helps. Maybe you can use it," but basically no one liked it. In China, the Ministry of Science and Technology said to stop it.

People never learn the lesson, because a month later Carnegie Mellon developed another version. They said: "Oh, we are not only using emotion recognition. We have gait recognition, so you can recognize gestures."

Those developers are excited to try this kind of thing without consideration of what it means for the next generation. Think of it from this perspective: let's say it's 15 years later and my girl would like to go to college. Maybe I would suggest: "Oh, maybe go for artificial intelligence. It is very exciting for me."

Then she will be like: "Oh, no, no, no, Dad. This guy has been spying on me for more than 15 years, and you want me to select it as a profession to pursue to spy on others? No way."

That is the long-term impact on the next generation, where the developers and the parents are not sure about what it means for the future generation. That is killing it, I would say.

Both the parents and the governments are not ready. We have to be quite careful. We were careful, but it is not enough. We need to think not about long-term AI but about the long-term impacts on the next generation—what could happen to them.

NANJIRA SAMBULI: Yes, absolutely. It is not lost on me, for example, that some of the tech giants' children are spared these experiments with such technologies.

Cordel, let me come back to you on this same topic around young people and speaking of those who are presently young—and others can also chime in on this point. You spoke about including young people in governance. In my experience this is usually said to mean involving them in the deliberations, so having them in the room but not at the decision-making table. You also spoke about movement building. I would love for you to expound a bit more on how we can take that into practice so that we go beyond just name-checking youth and other typically excluded groups.

CORDEL GREEN: I fully appreciate what you are talking about. I am not talking about tokenism, which goes beyond youth and is very often replicated when you talk about inclusion more broadly. When you dig very deep you have to ask: Is it tokenism or real involvement in decision making?

This is why I like the kind of networked movement approach, because the leadership is less centralized and you can empower nodes. There are nodes that can include youth who are involved in real decision-making. They are involved in university systems and in youth leadership, but they usually are brought into the frame through a process of "consultation." I am not now talking about consultation; I am talking about real and deep engagement.

To Yi's point I see a connection with what Eileen was saying about the human rights dimension because one of our primary responsibilities to the children we bring into the world is to help determine the health of the world we are going to leave to those children. They have to participate in the process, but adults have a role to play. It is not an infantile society in which we live.

So I take her point and would extend it further. If we examine the Sustainable Development Goals—Goal 1, ending poverty in all its forms, including equal rights to new technology; Goal 8, about decent work; Goal 9, about promoting inclusive and sustainable industrialization; Goal 10, reducing inequality and promoting inclusion of all; Goal 16, promoting peaceful and inclusive societies—if those are not just words and they are to mean something, then I, like Eileen, believe it is good to embed this in the human rights framework.

But I strongly support tethering AI governance to the SDGs because that is the safest way in my view to bring in civil society and to avoid the centralization of state decision-making, which is very much influenced by Big Tech, with whom sometimes they are very much aligned on issues of national defense and security, AI-driven surveillance, and control systems.

NANJIRA SAMBULI: Thank you so much.

Eileen, back to you, since you have been invoked already. It seems nowadays everybody agrees we have to use the universal human rights framework as the point of departure. How do we go about it? What are your thoughts about what is keeping us from actually implementing that as a framework to govern AI and digital technologies broadly?

EILEEN DONAHOE: I think that interestingly many in the technology community are not actually very exposed to the international human rights framework. It is a basic problem there.

I am going to comment on the others' comments. Yi Zeng, your comment about youth. To me the concerns expressed by your own daughter relate to that inherent sense of human dignity, human agency, and the desire for human autonomy, and the risks associated with AI that threaten those instinctive needs. I appreciate you commenting on the fact that the next generation is aware of those things instinctively.

I will also say that on the topic of meaningful inclusion we are all aware of the issue of needing to include people on the economic side, and Cordel just re-raised that in conjunction with the SDGs. We are all aware of the need to include diversity in data and the risks associated with failure to do that.

But the harder part that you raised earlier is the coding community—coding is a form of governance, and that community also needs to be more inclusive. The hardest part is meaningful inclusiveness in the governance community, in the policymaking community, in the decision-making community. That is where the rubber meets the road.

Here I will just say a closing point on Cordel's point. Multilateralism is no longer going to cut it in the international realm. Multistakeholderism that includes only the private sector and governments is not going to cut it. Civil society has to be there to represent the voices of people as the key stakeholders. Otherwise we have a democracy deficit in the governance of AI, so it has to be a very rich notion of multistakeholder processes.

NANJIRA SAMBULI: Thank you, Eileen.

Trisha, you raised an excellent point about DI as a siloed effort, and I think that links to some of the points that have been raised about none of these groups being monolithic, including civil society as one big group. I am curious about your point around meaningful community participation and trying to buck the trend on cultural erasure. Are you seeing any emerging good practices involving folks in proactive governance engagement, over and above the typical "We make noise from the rooftops and with placards outside the doors of power"—practices trying to walk the talk?

TRISHA RAY: A lot of the work being done to prevent what I would label as cultural erasure in relation to AI has been by local stakeholders. For example, speaking from my perspective, my mother tongue is Bengali. It is one of the five most spoken languages in the world, but the script is notoriously hard to digitize, so universities have been working to digitize it for the last 20-odd years. It is this slow progress by those stakeholders who are most affected by the issue that has gotten the language to where it is now.

Another example of more scattered but local approaches is studies auditing multilingual data sets online. Through these audits you find that certain high-resource languages fare well in these data sets in terms of accuracy, but certain languages—especially, from a paper that I read recently, African languages—do not do well in these audits. Identifying these gaps then helps provide a space for local stakeholders to start initiatives in this area.

NANJIRA SAMBULI: Thanks, Trisha. I do know that some projects with African languages have been trying to use voice, over and above text, to try to bring those languages into the corpus.

Zlatko, last but not least—obviously I wanted to come to you to see if all of these considerations figure within your IAIA proposal. How do all these groups come through? Also, a specific question to you about the powers proposed for such an agency, and I would add: Who would be the power holders?

ZLATKO LAGUMDZIJA: I think a very important thing is that when we talk about multistakeholderism, we have to be as open and as inclusive as possible. We cannot, let's say, by a multistakeholder approach neglect the role of governments or the business community or whatever. There is no doubt that governments are in the driver's seat in today's organized world. Multilateralism generally is under big, big, big waves of change, and the world is looking for new ways of restructuring the governance of global issues in general.

You are absolutely right. I agree with the concept that we should connect artificial intelligence with the SDGs as much as possible. That is one thing, because the SDGs have the same problem in a certain respect as artificial intelligence. I will give you an example of what I am talking about.

What are today's biggest problems on the globe? Health. Who do we have? The World Health Organization (WHO). Education? We have the United Nations Educational, Scientific and Cultural Organization (UNESCO), and we have the United Nations Children's Fund (UNICEF). They are dealing with education. They are not the only ones dealing with health and with education, but there is a global multilateral player in charge of that coordination, promoting the platforms, spreading the culture, and creating an environment that is friendly for that.

Today the economy is the big global problem. Who do we have multilaterally for the economy? We have the World Bank. We have the International Monetary Fund (IMF) and a series of Bretton Woods-related organizations.

The next one is atomic energy. Who do we have for atomic energy? We have the International Atomic Energy Agency. Fine.

Sixty years ago we had the Apollo moonshot as a big generational thing. What is the Apollo moonshot project of our generation? I see two things. One is artificial intelligence, and the other is the SDGs.

I think we need two—let's call them "platforms" or organizations—like the WHO, the IMF, and the International Atomic Energy Agency, dealing (1) with the SDGs, let's call it the ISDGA; and (2) with AI, the IAIA.

I used the same wording when I was promoting the IAIA that is used in UN General Assembly and Security Council decisions about the role of the International Atomic Energy Agency. It says, "The Atomic Energy Agency is the world's central intergovernmental forum for scientific and technical cooperation in the atomic energy field. It works for safe, secure, and peaceful use of atomic energy"—or for our purposes artificial intelligence and digital governance—"contributing to international peace and security in accordance with the Universal Declaration of Human Rights and the SDGs." Voilà. Here we are. The same definition as for the International Atomic Energy Agency.

But as we said in this Social Contract that we launched back in September—there were ten of us: Madame Vīķe-Freiberga, former president of Latvia and president of the World Leadership Alliance; Governor Michael Dukakis; two or three professors from Harvard and the Massachusetts Institute of Technology; and some people from Google who were involved in this Congress.

Our concept of a Social Contract for the Age of Artificial Intelligence is based on the fact that it has to put together all stakeholders and power centers. The first one is individuals, clients, and groups, and they should be working on data rights and responsibilities, Internet rights, education, political participation, and responsibility.

The second one is governments. We cannot avoid them. I used to be one over there, and I know that I could not avoid myself when I was there, and I cannot avoid the governments when I am not in a government.

So we have to apply pressure. Politicians act because of their programs but also based on pressure. We have to press governments all over the world to collaborate among themselves and to collaborate under UN international organizations.

The third stakeholder in our Social Contract in the Age of AI is business entities, civil society organizations, and communities, and finally artificial intelligence agents like IEEE and other organizations that are dealing with such things.

My point is very simple. We have to be inclusive, and we should not say governments or us, multilateralism or multistakeholderism. No, no, no. We cannot get rid of multilateralism. Otherwise we would be utopians, or we could be Luddites who deal with artificial intelligence by taking hammers and breaking the machines so we don't go in that direction. We have to be inclusive, include the governments as well, and make them part of us, not against us.

NANJIRA SAMBULI: Wonderful. What a wonderful way to close it and to center us around the understanding that we are talking here about governance that goes beyond governments but is enforced by governments.

Ladies and gentlemen, please raise your virtual hands for this excellent panel of speakers. Thank you so much to Trisha, Zlatko, Cordel, Eileen, and Yi Zeng.

Now I will hand over to Anja, who will chair the second panel that will dive deeper into how we engage different stakeholder groups. Thank you once again, everyone.

ANJA KASPERSEN: Many thanks to you, Nanjira, and the speakers and participants in your panel for a really interesting discussion.

As Nanjira said, we are meeting, albeit virtually, today to emphasize the importance of inclusivity when discussing future governance of AI-based systems and adjacent technologies. Thus the session today kicked off with interventions on who the stakeholders are that need to be included in the dialogue around AI governance to ensure meaningful inclusivity.

We are now transitioning to an equally important theme: How do we engage specific stakeholder groups? Do we currently have the right platforms for dialogue on AI governance, and, as Trisha was referring to earlier, to set agendas? Do we have a shared language to discuss shared challenges and opportunities, and as Eileen mentioned earlier, do we have sufficiently differentiated approaches to allow meaningful engagement across and between stakeholder groups and processes?

So many questions. I am thus thrilled that we have such experienced and diverse speakers in this panel, people who work tirelessly every day to make sure that the voices of children, youth, women, minorities, consumers of all age groups, people with disabilities, and everyday workers working on or impacted by advances in machine learning, autonomous systems, and algorithmic technologies are heard. They are with us today to hopefully answer some of these questions, raise others, and also look for opportunities.

With me today I have on children and AI, Beeban Kidron, who is a film director and the chair of 5Rights Foundation; on women and minorities, Renée Cummings with the School of Data Science at the University of Virginia; on consumers, Helena Leurent, director-general of Consumers International; on people with disabilities, Patrick Lafayette, the CEO of Twin Audio Network; and on workers' rights, Christina Colclough, founder of The Why Not Lab.

Before I hand it over to Beeban to kick us off, let me quickly refer our listeners and participants to our Chat function. If you have questions for the speakers, please write them down, and I will do my best to integrate as many as time allows in the follow-up dialogue with the speakers following their initial three-to-four-minute presentations.

First of all, I am delighted to hand over the floor to Ms. Kidron.

BEEBAN KIDRON: Thank you very much, and thank you very much for having me. I am about to break out in violent agreement with the previous panel. Many of the things they said are the background of the work of 5Rights.

Let me just say that the 5Rights Foundation works wherever children and the digital world intersect. I think most of you on this call will realize that that is increasingly everywhere. We look at this through three very practical lenses. The first is data protection and privacy, the second is child-centered design, and the third is children's rights. Our organization is set on implementable change.

Something was bubbling up in me during the previous panel, so if I could just make the very obvious and overlooked point: the AI revolution is as much a management revolution as a technological one. What I mean by that is that the platforms in general that we have in our sights do use tax laws, patent laws, intellectual property (IP) laws, and so on, and yet they have claimed a place outside the laws and rights that protect society more generally.

If automated systems are optimized for the benefit of these companies, and if they are optimized for the commercial exploitation of adults, then what we find when we get to the children piece is that they are not recognizing children. They are not recognizing children's rights, and they are not recognizing that children in most societies have existing laws, existing rights, and actually a culture by which we treat them according to their evolving capacity.

So my argument, and our argument, is that we are gradually, in an on- and offline world, not only failing to embed the rights that children should enjoy but actually eroding the rights and privileges that they have enjoyed so far. It is a backward, regressive step.

I want to take two examples very briefly to show how leveraging existing laws and existing rights on behalf of children actually starts to push back at some of this erosion, and why the governance point is so important. The first one is—and I suppose I should declare, as others have done, that I am a member of the House of Lords here in the United Kingdom, which is our second chamber—that in the context of the Data Protection Act 2018 I was able to introduce data protection for children. On the face of the bill we alluded to children's rights, to the United Nations Convention on the Rights of the Child (UNCRC). We are a signatory; 196 countries are signatories. In alluding to that and getting it on the face of the bill, something magical happened, and that was that a child suddenly became defined as a person under the age of 18.

As many of you will know, one of the things that has happened to children in the digital world is that there is this magical age of adulthood of around 13. So in one fell swoop, by taking a lever that already existed, we actually brought protections for 14-, 15-, 16-, and 17-year-olds under the statute.

We also managed to embed the concept of best interests, which is another concept inside the children's rights framework. As others have said, this actually is not just a philosophical position, but it is a practical position because the final law ends up saying that it is unlikely that the regulator will deem the commercial interest of a company to be greater than the best interest of a child when the two come into conflict. And as many of you will know, that is a quite distinct thing.

Let me move to another example, which is that over the last three years we have had the privilege, on behalf of the Committee on the Rights of the Child, of drafting a General Comment. A General Comment is the authoritative document that says how children's rights apply in a particular environment.

This particular General Comment, General Comment 25, is on children's rights in the digital environment. I would urge all of you who were talking about international cooperation to take a look at this document because it is quite a shape-shifting thing. It recognizes that children's rights must be embedded in automated systems, it recognizes that they must be embedded by default, and it recognizes that whilst children are and wish to be users on their own account and in their conscious selves, many children are excluded for a number of reasons, whether it is geography, language, or other factors. But equally it says many children are affected by the digital environment in ways of which they are not aware, whether it is biometric things in the environment—facial recognition, the finger on the thing—or whether it is the way in which AI is used for exam results or indeed employment or government resources.

What the General Comment asks states to do is to make sure that children's rights are not violated by automated systems. I think this is an interesting way forward.

I am aware of the time, and I just want to finish on this point. In the course of drafting the General Comment, we held 69 workshops in 27 countries with over 700 children, and this is something we take very seriously in 5Rights. All of our work is in the voice of and alongside children's views. All I can say is that they are hugely enthusiastic about the digital world, but they are absolutely clear that they do not want their privacy invaded automatically. They do not wish to have their location displayed automatically. They do not wish for AI to target them for commercial reasons or to make assumptions about them, and their voice is very clear, and those views are in the General Comment 25. Thank you.

ANJA KASPERSEN: Thank you so much, Ms. Kidron. Very important points made there on a stakeholder group that we all care deeply about, both among participants and speakers.

For the next stakeholder group, women and minorities, we will have Renée Cummings speak.

RENÉE CUMMINGS: Thank you so very much. Really my work calls for more authenticity in algorithms and AI. Through my work as a data activist I look at ways in which we can enhance advocacy and do more active evangelism within minoritized communities and of course among women, because I am very passionate about AI and new and emerging technologies in general, and I spend a lot of my time in the C-suite doing risk management around bias and discrimination in new and emerging technologies. I also seek to introduce, or to merge, the concept of risk management around AI with a rights-based approach.

So much of what I do is about stretching the ethical and creative imagination of AI and moving beyond the ethics-washing and window dressing that we are seeing with many Big Tech companies using women of color as their AI ethicists and trying to build momentum around that. So I try to bring an authentic approach to what has been seen as AI ethics "theater."

Much of the push behind my work at the University of Virginia School of Data Science is about amplifying the voices of diverse communities and doing a more critical analysis and examination of diversity, equity, and inclusion within the tech space. Something that I have been looking at would be a critical examination that speaks to a trauma-informed approach to understanding data and understanding how data has created intergenerational trauma in minoritized communities and in underserved communities in general.

Something that is very critical to the work that I do would be looking at the codependency between governments and Big Tech and striving to bring a more robust and rigorous ethical appreciation when it comes to issues of accountability and transparency and explainability and black box atrocities in general including issues of privacy, safety, and security, particularly within the realm of criminal justice. I am a criminologist and criminal psychologist, so the ways in which we deploy algorithms in sentencing, policing, and corrections are very important areas as we look at accountability and transparency and explainability.

The deployment of algorithms on the streets as well as in the corporate space is an area of my research at this moment. Particularly when we speak about the deployment of algorithms on the streets of the world, we are looking at minoritized communities, we are looking at underserved and under-resourced communities, so algorithmic fairness and fairness in AI matter when it comes to the ways in which the technology is being deployed, who is visible, and which voices are being amplified or not.

It is for me about multistakeholder engagement and finding more interactive and dynamic ways to engage communities across the board in understanding this technology because if we really want to see a mature AI, we have got to do a lot of groundwork in our communities through stakeholder engagement and building community efficacy around the technology and enhancing the ways in which we can engage communities through the different ways we may choose to communicate about this technology. I strive to engage more women and engage more people of color and engage just about every group in understanding this technology.

Most people ask me: What is a data activist? I always say a data activist is the conscience of the data scientist. So my work is situated in algorithmic justice. It looks at algorithmic decision-making systems in criminal justice, in particular sentencing and policing, where most of my work is situated, and of course corrections and just about anywhere where algorithms are being used unjustly.

I thank you for that.

ANJA KASPERSEN: Super, Renée. Thank you so much for those very insightful comments.

Then to an issue that concerns all of us, being consumers of these technologies. I am delighted to hand over the floor to Helena Leurent.

HELENA LEURENT: Hi, everyone. Thank you so much, Anja. It is wonderful to be joining this group.

Some of you may be asking: Why are we talking about consumers specifically on this particular panel? Surely consumerism and the way in which we serve consumers is the source of our problems and why we are in such a pickle.

In fact, we would argue, no. Because who speaks for you as a consumer, that is, as somebody who is an actor in the marketplace, where we need a fair, safe, and sustainable marketplace for everyone, and where everyone who is an actor in that marketplace has both rights and responsibilities?

I loved what Beeban talked about earlier. This is the basis of a legal framework and system which is in place. It is consumer protection, which is invisible to many of us in certain countries where it has been in place for a long time but is in fact very new in others. It is the consumer rights principles that underlie it that I think can be a source of innovation, social justice, and meeting our SDGs, and part of this puzzle that we need to build together.

You may not know that consumer rights and consumer principles were not talked about until about 1962, when it was JFK who brought up four. These now sit at the United Nations in the form of Guidelines, which United Nations Member States have signed up to; adopted in 1985, those were later updated to include sustainable consumption.

It is intriguing to me that so many of us, even those trained in marketing and marketing ethics, do not know what basic consumer protection rights are. They are: the right to be informed about the products and services that you buy; for your voice to be heard back into the marketplace; for there to be choice; for there to be safety in products and services; for there to be redress; for us to be informed about sustainable consumption; for all of those rights to be as strong online as they are in the traditional marketplace; and, beyond that, for our markets to serve us in terms of health and the environment.

If you start actually thinking about those rights and how they apply, it becomes an interesting conversation. Remember that in certain places, like Zimbabwe and India just to name two, there had not been a Consumer Protection Act in place until about 2019.

Where we come in: Consumers International is a network—I loved what was said earlier about the power of networks. We are 200 groups in 100 countries around the world that fight for those consumer rights, by advising consumers, by testing products, by advocating to business or government, by creating collective efforts of consumers, and even by representing consumers in class actions. What is amazing in a fragmented world is to see on calls Bhutan, Uruguay, and Uganda, just to name three, collectively talking about the ways in which consumer rights are expressed in the marketplace.

This applies then to the digital world and AI, and what we look for is not how this is made but the impact that it has in the marketplace.

What is particularly interesting as we try to build the next stage of governance together: our German member, vzbv, highlighted how the work being done to build e-commerce rules at the World Trade Organization may actually undermine the transparency that is being promoted within Europe. It is this sort of insight into what is actually happening in the marketplace for us as ordinary consumers that is so valuable to put back into these conversations. What we do then as Consumers International is bring this together, try to create the international picture, and then bring that into conversations such as at the OECD and at the United Nations Conference on Trade and Development, where this can become a good collective conversation.

How do we improve this, which is the question? Let us develop those types of mystery shopping—ways in which we can highlight, faster, where problems of discrimination are being generated—and bring them to the attention of those regulators who can act in the international cooperative and collaborative efforts that they are building; let us share with consumers more about what they can watch out for, recognizing that they have a very limited period of time and there is an awful lot that we are expecting consumers to understand; find agile ways to connect into policymaking; engage the younger generation of consumer advocates, which gives you incredible faith, because this is a group who have the aspiration toward a fair, safe, and sustainable marketplace and some of the incredible tools in both advocacy and legal consumer protection to be able to drive change; and finally, start thinking about consumer protection by design.

I mentioned at the start that it is extraordinary that you can go through an entire education in marketing and design and talk with groups of advertisers, and they have never once heard about consumer rights. It comes at the end of the process in business. It comes as a legal check and balance at the end.

This is a source of innovation. This is a source of building in trust from the start and a source of reaching forward in particular to SDG 12, which calls on us to create an entirely new system for our marketplace for sustainable consumption and production.

I would love to see as we move toward 2030 and beyond a regeneration of our understanding of the basic principles of consumer rights and a revolution in the way that we incorporate them in how we think about our marketplace.

Thank you, and it has been a pleasure being able to listen to other panelists and learn, and I look forward to continuing the conversation.

ANJA KASPERSEN: Thanks, Helena. Very interesting from Helena here on both the role of and also in some ways the untapped potential of consumer councils and consumer associations when discussing future governance of AI.

With that I want to hand over the floor to Patrick Lafayette.

PATRICK LAFAYETTE: Good morning. I would like to thank the Carnegie Council for Ethics in International Affairs, the city of Prague, and the World Technology Network for inviting me to this second event for the International Congress for the Governance of Artificial Intelligence. All protocols observed.

My presentation does not reflect the views of all registered categories of disabilities within the Caribbean. I can, however, represent to some degree the opinions and concerns of blind and visually impaired persons within the Caribbean region.

As we focus on the necessity of meaningful inclusivity in the context of AI governance and inclusion for persons with disabilities, we have to look at establishing these groups of persons, agencies, and special interest groups as contributing partners that have significant impact on the effective governance of the AI revolution.

Additionally, in order to stimulate cooperation among stakeholders there has to be ongoing dialogue across the region to build capacity and to navigate the challenges posed by new and emerging technologies. Many disabled persons foresee compounded risks in the use of AI unless there is commitment to and prioritization of privacy and ethics and the mitigation of bias. All types of AI need diverse data sets to prevent algorithms from learning bias or coming up with results that discriminate against certain groups.

When I apply for social goods or public services, for example, the fact that my data would indicate that I have a disability could be viewed, calculated, and determined as a negative and count against my receiving these goods and services. There is a critical need to establish AI standards in order to ensure guardrails against violations of privacy and other human rights. Proper ethical practice must be observed and bias and discrimination policed, with violators severely fined and/or punished through the courts.

Therefore, as we look forward to legislating AI policies for governing the AI revolution so that full inclusion of persons with disabilities occurs, we must strive to achieve effective AI governance through a multistakeholder approach in which the matrix and the treatment of data are necessarily open and accessible to all, from construct to application.

We in the Caribbean need to pay greater attention to making our own contribution to the governance and technological advancement of AI in the region. It is not a far stretch to see how disabled individuals can easily be disadvantaged and marginalized by legislation and policies brought about through a lack of proper inclusion, rules and regulations, or representation within the construct of an AI governance that protects the rights of the individual.

As Caribbean residents and citizens of independent states we already have to deal with the problem of our disability, the challenges of our poor economic condition, the lack of viable employment opportunities, and the lack of proper educational tools and facilities. In most cases these communities throughout the Caribbean rely heavily on governments for care and social services.

Despite the charms and the allure that technology offers, we are nonetheless cautious. We cannot ignore the stigma attached to governments and the tech industry, which have been guilty of trading and selling personal information. This to us is a major concern. We are in the category of the vulnerable, and we are concerned that our choices and options might be conditioned on our compliance or willingness to part with personal data.

We request that when the AI policy construct is implemented, the disabled people of the Caribbean can be assured that their governments will establish guardrails, watchdog groups, and algorithmic constructs to ensure parity in the governance of artificial intelligence.

I am Patrick Lafayette. Thank you for your time and attention.

ANJA KASPERSEN: Thank you so much, Patrick.

With that, to our last speaker before we have a small discussion following, Christina Colclough, the founder of The Why Not Lab.

Over to you, Christina. I saw you were very active in the Chat, so I am sure there will be many tangents to your intervention here that we can follow up on afterward.

CHRISTINA COLCLOUGH: Thank you very much. It is always a little bit daunting being the last one because so many good and valid points have been made.

I want to pick up on what Patrick said about the governance of these new AI technologies. In workplaces this governance is in the majority of cases totally lacking when it comes to the inclusion of workers: their voices, their agreement first to the surveillance they are subject to, but also to how their data is used, what is inferred from it, for what purposes, whether the data is sold, and so on.

This is stunning to me. It is stunning that the majority of us in various forms, being self-employed, in the informal economy, as employed workers, and as digital labor platform workers, we are workers, and yet this whole notion of co-governance of algorithmic systems is totally out of fashion.

And here, a little wink-wink with a smile to Renée: in the work you do with the C-suite, include the workers. As I said to the OECD ministers when they adopted their great AI principles, which I saw Joanna referring to earlier in the Chat: "This is great. Now you must ask, fair for whom?" What is fair for one group of, in my case, workers is not necessarily fair for another. How can we make those tradeoffs and those decisions explicit but also consensual, in the sense that we at times might have to have positive discrimination toward one group, and what is our preparedness for this?

Then you can ask: What if we don't do this? The Why Not Lab, why not? What would happen if we don't do all of this? Then I am afraid that the current commodification of work and workers will continue to the point that it is almost beyond repair, when the inferences about us predict our behavior, where we as humans become irrelevant, where we might be chatting in three years at a conference like this about how we defend the right to be human. For all of our failings and beauties, our good sides and bad sides, this is what is at stake.

Unions have always fought for diverse and inclusive labor markets, and I am very afraid—and I think Renée's work in criminology points in this direction—that we are heading toward a judgment against a statistical norm that will exclude lots and lots of people and therefore harm the diversity and inclusion of our labor markets.

My call here is very much let's find a model for the co-governance of these systems. Let's put workers first. We have the principles in AI of people and planet first. But we cannot do that if we actually do not bring dialogue back into vogue.

It is also very telling that if you look at data protection regulations across the world, either workers are directly exempt from enjoying those protections or workers' data rights are very, very poorly defined. We have that in the CCPA. We have that in Thailand and in Australia. The GDPR even had, in its draft, stronger articles specifically on workers' data.

My call here would be to bring dialogue back into vogue. We have to stop making enemies of one another. We should definitely work on workers' collective data rights, moving away from the individual notion of rights enshrined in much of our law toward collective data rights. We need to balance out this power asymmetry, which is growing to a dangerous and, as I said, irreparable level, and then we must talk about a regulatory requirement for the co-governance of these systems. That is not to say that workers should bear the responsibility; that must lie with the companies and organizations deploying these systems. We need much stronger transparency requirements between the developers and the deployers of these technologies. We must avoid a situation where developers can hide behind intellectual property or trade secrets to avoid adjusting their algorithms, their training data, and so on.

My last call is that we need our governments to up their game. This cannot work under differing national laws. We need the Global Partnership on AI (GPAI). We need the OECD. We need the United Nations to start working toward a global governance regime that also caters for value and supply chains and for the varying economic situations in each country, so that we stop what I call the "colonialization" of much of the developing world around this digital change.

This is a macro thing. We need governments to regulate, we really need to get them to the table—I am on the board of GPAI, and can say there is resistance to commit to any form of joint regulation—and we need companies to include their workers. Data protection impact assessments on workers' data must include the workers. Algorithmic systems deployed on workers for scheduling, hiring, or whatever must include the workers.

Then we all have to stop making enemies of one another, and we must also realize that most of the people listening to this are workers, and we should have a voice. Thank you.

ANJA KASPERSEN: Thank you, Christina. I think that is a great motto we should all take away from this discussion: "Bring dialogue back into vogue." Next time we will have everyone wear similar T-shirts with that on it. We have to start somewhere.

Lots of really interesting points. I have some questions that came up, listening to all of you. There have been some exchanges in our Chat with some questions and comments back and forth. We are trying to look at how we can engage with participants more proactively beyond Chat functions. We are still looking at that.

Before coming back to each one of you, what I will do in the remaining time we have left is go in the same order and ask you to comment on what you heard from your colleagues, and I may drop in a question here and there.

First of all, I would like to bring in Yi Zeng, who you heard speak before, because he raised a very important point in the Chat around data ownership and how to engage with those who build the systems. It ties in, too, with a couple of the points that Helena, Christina, and I think also Beeban were discussing: How do you actually engage those who are on the receiving end of these systems? Since we are discussing the how, what would that look like? How do we do that?

So, a very quick comment from you, Yi, since you raised this in the Chat, and maybe questions to our speakers.

YI ZENG: Maybe not a question but a more extensive discussion. I raised the data erasure problem from the scenario of an international company that is based in the United States, with branches in China and of course in Europe, where it is fully compliant with the GDPR. They are technical AI researchers. What I said is that they choose not to talk about erasing data features and concrete user data from AI models; they emphasize the possibilities of, and what they do about, erasing data from databases.

To me, as a technical AI researcher, this is essentially cheating, because you know the data is still in the AI model in many ways. There are algorithms that can reverse that data back to the person it came from, and the data can in some way be recalled to its concrete form, even though the version in the database has been deleted.

Let's focus on this example. What I would suggest is that technical researchers, for now, are not really taking responsibility. We have to take care of the concrete design. The AI ethical principles read very beautifully in every version we have in the world, yet many of them are not technically feasible, so you have to have a lot of technical AI researchers focusing on making them real instead of leaving them as claims that merely sound good.

That is the real world we have to face. The worst part is people choose not to talk about it.
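A minimal, hypothetical sketch in Python of the gap Yi describes: a toy model that memorizes its training data will still reproduce a record after that record has been erased from the database. The data, field names, and model below are illustrative assumptions, not anything presented by the panel.

def train_nearest_neighbor(records):
    # The fitted "model" keeps its own copy of the training data.
    memorized = [dict(r) for r in records]

    def predict(age):
        # Return the label of the closest memorized record.
        closest = min(memorized, key=lambda r: abs(r["age"] - age))
        return closest["label"]

    return predict

database = [
    {"age": 34, "label": "approved"},
    {"age": 61, "label": "denied"},  # sensitive record a user later asks to erase
]
model = train_nearest_neighbor(database)

# "Erasure" as described: the row is removed from the database ...
database = [r for r in database if r["age"] != 61]

# ... but the trained model still reproduces the erased individual's outcome.
print(model(60))  # -> "denied"

Deleting the database copy does not touch what the model has already absorbed, which is why erasure guarantees that stop at the database level can mislead.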

ANJA KASPERSEN: Thank you so much, Yi Zeng, for adding that to the technical side of the discussion.

Beeban, over to you. It is interesting now to have Yi's point on translating good intentions into conscious coding, essentially. You mentioned that a lot of this is about management, and I thought that was a very interesting point: the digital revolution is as much a management revolution, and quite a few people have argued for some time now that we do not have the right type or level of AI fluency among leaders, and I am referring to leaders quite broadly, from teachers to government officials, to actually govern this. You mentioned a few examples and touched upon it in your presentation. COVID-19 has shown us that our kids' digital footprint is larger than ever, and Yi was just mentioning how the retention of that data is not really being spoken about.

Also, listening to your co-speakers, can I challenge you to add a few more comments to your initial intervention?

BEEBAN KIDRON: Definitely. I want to thank everyone for their contributions because it built up like a Jenga castle of what there is to do.

I want to pick up on something Patrick said about AI standards, but I would like to do a bit of a shout-out for data minimization principles. We spend quite a lot of time talking about what happens once it is out there and the problem of chasing, deleting, holding, inferring, and so on, and I think there is a lot of work to do upfront about what is taken in the first place and on what basis and to what end may inferences be made. I think that is something that as a community we should put a bit more focus on.

But I do want to pick up on Patrick's point. In fact, to be fair, a couple of other people including Helena mentioned this idea of standards. There are standards in the literal sense, IEEE standards, International Organization for Standardization standards, British Standards Institution standards, the kinds of standards we use across many industries including this one, that can be certified and that can find their way in as a soft international way of doing certain things. But there are also standards in the sense of actually getting legislators and regulators to say what they mean: What is the floor?

One of the things is that I, who am quite keen on regulation, am also quite keen to point out that regulation should be a floor. It should not set a ceiling on what is best. I think there are many versions of best and many versions of interpreting standards and systems that will give us an innovative, creative, and rich world, but what we want are standards that mean there is a floor below which people cannot sink.

To that end, I have to say from my experience, and I don't want to over-claim it, that I have spent a lot of time over the last couple of years with engineers, and I cannot tell you how many times I leave the room with the engineer saying: "I never thought about it like that. I have never thought about it like that."

I think the point of standards is to force people to think about it "like that," and the "that" in that sentence is our collective will, are these international agreements, are the laws that were mentioned by others, and are the rights agreements, the SDGs, and so on.

In particular, we have been working on a standard, the IEEE 2089 standard, which sets out an age-appropriate framework for those companies that want to engage with children, and it literally goes through a set of processes that an engineer must consider as they design their system. Some of those things will be difficult. Some of them will fall into the category of: "Yes, we can do that easily. We just never thought about it like that." Some of them actually then come back to this corporate will, political will: What are we willing to say we want for society?

That would be my comment.

ANJA KASPERSEN: Super. Thank you, Beeban.

We have another eight minutes to go, so I will go to all of you before we wrap up.

Over to you, Renée. You spoke about authenticity, diversity, equality, and equity, which are key features in your research. You also said, and I thought it was very interesting, that without critical examination of the traumas that get reflected into our understanding of data we are not going to come out on the right end of doing this responsibly.

While you think about what your co-speakers said, could you also elaborate on that particular point, because I think it is an important point that may not otherwise be well understood, that trauma of data.

RENÉE CUMMINGS: Thank you. I think it is really about intention, AI with intention, and what is the intention behind the technology? Because it is about agendas, and it is about what is given priority within the C-suite, within the design space.

As an AI ethicist and data activist I work a lot with data scientists, and it is about, as I said, stretching the ethical and creative imagination. But you can only do that by confronting the tough questions and drilling deep into concepts such as implicit bias in technology, looking at bias and discrimination in the data sets. We are not going to get to data ownership, transparency, explainability, and all these fantastic things without being honest and without looking at that source of implicit bias.

Within the C-suite, what is required sometimes goes beyond conceptualization: high levels of intellectual confrontation. So what is needed in that space is multidisciplinary thinking when it comes to this technology. As a criminologist and criminal psychologist and someone now working in data, what I spark in that space is an extra kind of interest in the ways in which we need to merge different fields and disciplines with this technology.

The challenge at the moment is that every organization, every agency, and every Big Tech company has ethical guidelines, standards of conduct, and professional standards that must be applied, but we are not seeing that. We are also seeing many Big Tech companies trying to come up with technical approaches to deal with concerns such as bias and discrimination, and it is not delivering the kinds of results that are required.

So it comes back to the thinking behind the technology, and it is something that we are really not looking at. So when I speak about authenticity I am speaking about being honest, and what we are not seeing is that honesty.

Within the context of trauma, what we have seen is that data is a power system, and it is about privilege: Who does the collecting? Who does the analysis? Who does the classification of the data? How is this data used? What is it being used to do? When we think about the ways in which we are using data, it is a cultural system, it is a political system, it is beyond an economic system. We have to think that way, and that authentic approach to understanding data is what we are not seeing.

When we think about the transmission of intergenerational trauma, your data is now your DNA, and we have got to look at that. We are seeing in certain communities how data is denying groups access to opportunities, access to resources, and access to the kinds of interventions that are required for some communities, particularly underserved, under-resourced, and minoritized communities, to build wealth or to participate equally in society.

It comes back to the courage, the will, and the ability to get real about AI because what we are also seeing is that when we look at many of the institutions, the organizations, and the associations that are developing these ethical frameworks and deploying them, you are looking at these organizations and saying: "Where is the diversity? Where is the inclusion? Where is the equity?" We have to check ourselves and check the thinking behind the deployment of this technology.

ANJA KASPERSEN: Super. Thank you, Renée.

We are going to check in with the consumers again. Helena, you said one interesting thing, which is that we have to find better and more agile ways to connect consumers to policymakers, which I think is a very important point. It sounds easy, but it is not so easy, I know.

HELENA LEURENT: Picking up on the point about standards, you actually have consumer experts who are brought into standards making. Often they are volunteers, and finding the investment to enable people to do that, so that it is a balanced conversation, would be one area. Two, you have consumer advocates in at least 100 countries around the world. Many times those organizations are technically funded by government with some support, which may or may not turn up.

There are mechanisms in place that we can enforce and make work. There is something to the idea that if you want a Marshall Plan for meeting the SDGs, you cannot create everything new. There are some things that are beautiful and that can be reinvigorated and brought to life in the way we need them to work now. To be brief, that would be my reaction.

ANJA KASPERSEN: Super. Thank you, Helena. Important points.

Over to you, Patrick, following in the same vein around standards and inclusion.

PATRICK LAFAYETTE: Thank you. I would like to echo something from Renée. When she said "data is your DNA," that resounds very much throughout my community. One of the issues we have is with privacy. A great deal of the issue with the information we receive is also how it is interpreted for us.

I will give you an example. Consider some of the tools we use to identify and interpret things, like photographs. I am given a photograph, and I now have to determine visually what is in this photograph. Similarly, if I go out on the road, I can wear glasses, and for the time being I will have an individual be my eyes. But we are looking ahead to when AI will be my eyes. How will the interpretation and discrimination of my visual, my ocular, reception occur? What will discern for me, or dictate, what it is that I see, what it is that I get a chance to interpret?

These are some of the issues we have when these algorithms are constructed. How much of it takes the disabled into consideration? Because we are our data, our own data, we are very concerned about how our data is used, how it is accessed, how it is sold, and how we are discriminated against or marginalized, and these issues resound through our community. So when Renée said that our data is our DNA, that definitely resounded with me.

The tools that we use are, for you, just an average phone. For us, they are body parts. We rely on these devices directly for just about everything. Unfortunately, we find that these devices are often priced outside the reach of the ordinary blind individual, so we resort to what we can afford, and when we resort to what we can afford we also have to resort to applications and supporting third-party tools that we have no input into. We are just receptors of the information. Any information can then be dictated to us, and we just have to receive it.

Data is our DNA, and I will take that statement with me when I leave this presentation.

Once again, thank you.

ANJA KASPERSEN: Thank you so much, Patrick. Very important points. We could continue for a very long time just on this point: data is your DNA, and experiences become part of the transaction that is becoming part of AI as we roll it out into our lives.

Over to you, Christina, as the last speaker again before we wrap up. I don't have a specific question for you because we have touched on so many areas that relate back to the workers' issue. I will quote you on the vogue part. But you ended your previous comment speaking about the co-governance of these systems and how companies also need to include their workers. Maybe you would like to make a final comment on what that would look like.

CHRISTINA COLCLOUGH: Referring to the Chat: one of the participants, Maria Axente, was saying that things are changing, that more and more companies have ethics boards, committees, and so on. I, a little bit provokingly, answered that I don't give a hoot about those ethics boards or committees if they do not include the voices of those affected by these systems. Of course, in companies, in human resources, that would be the workers.

You cannot talk ethics without being inclusive. Let's face it: a lot of the impact assessments or audits that are being made sit down in the legal compliance section, and they are in no way part of any form of dialogue or inclusion.

Then I want to pick up on something Patrick said again. Patrick, you seem to be the target of my comments here. But that is on affordability. We have to be aware, number one, that a lot of these systems being deployed are very expensive, that is, we are splitting the market between the small and medium-sized companies that cannot afford these technologies and those that can, and we are seeing a trend now of off-the-shelf, unadjustable, as Patrick was saying, solutions which do not rank privacy, ethics, or any of that very highly, which are more affordable, yes, but which I would say are more dangerous to our DNA as people.

My last comment would be DNA, yes, it's personal, but it's also cultural. It's also a community thing, and we must take culture far more seriously. I think the colonialization that is going on right now on a geopolitical level and on a technical level has to be discussed in far more detail.

I want to thank you for inviting me to be here and for being able to listen to all of these wonderful speakers. It has been great. Thanks.

ANJA KASPERSEN: Super. Thank you so much, Christina.

With that, I will hand over to Wendell Wallach. I want to quickly summarize four key takeaways for me:

(1) Make dialogue vogue;

(2) Take culture seriously;

(3) Data is our DNA;

(4) And think more about it "like that."

With that, Wendell, over to you, and a huge thank-you to our stellar, excellent speakers for sharing their insights and to everyone who has been listening.

WENDELL WALLACH: Thank you ever so much, Anja, and thank you to all of our speakers on both panels. We keep hearing the words "inclusivity" and "multistakeholderism" as if they are mantras to be repeated over and over again but without this level of depth and thought about what might truly be entailed as we get involved in multistakeholder governance, so I am most appreciative for the input we had today.

As we structured today's session we put a particular emphasis on those stakeholder groups that are underserved by today's governance structures, but of course they are not the only stakeholder groups, and the discussion perhaps also underscored who we didn't include. For example, in Prague we would have had a panel with very young thought leaders. Of course, even though we alluded to the needs of the younger generations, we did not have any stakeholders who actually spoke for them here.

Another thing we would have done in Prague is we would have opened with a panel of AI researchers who have tirelessly championed ethical concerns. In addition to Yi Zeng, who you heard from today, we would have included Francesca Rossi, Stuart Russell, Virginia Dignum, Yoshua Bengio, Jaan Tallinn, Toby Walsh, and so many others in the research community who are taking a leading role.

What I would like us to turn to for the next 20 minutes is a conversation with Maria Axente and with Amandeep Singh Gill about some of the stakeholders who are often seen as having power and the role they are playing in structuring the international governance of AI. In that respect I mean the role of corporations, of governments, and specifically of the strategic defense planners within those governments.

Let me first begin with Maria. Maria is the responsible AI and AI for good lead at PricewaterhouseCoopers (PwC) in the United Kingdom, and together with her colleague Rob Cargill she has had a nearly ubiquitous presence at AI policy conferences for many years now. But Maria I think can inform us not only about her perception of the role of corporations in AI governance, drawn from the corporate leads she interacts with, but can also give us a little bit of input from some of PwC's research with corporate leaders.

I know, Maria, you have a somewhat different take than perhaps was expressed by some of our panelists on whether corporations are playing a constructive role or whether they are engaged in ethics washing and opacity by design, as they have sometimes been accused of doing. Of course, we do have a few examples, such as Apple on privacy and Microsoft on the use of facial recognition software for surveillance and law enforcement, but I am hopeful that you can give us much more perspective from your interactions with corporate leaders and from the research done by PwC.

MARIA AXENTE: Thank you very much, Wendell, and good afternoon, everyone. Very interesting conversation. I wish I could have attended from the beginning.

From the perspective of PwC, because we are not a technology company, we are a professional services company, we see quite a lot. We talk with everyone who is anyone, including the tech companies, governments around the world, and also companies that are not tech companies. We do quite a lot of research. We talk with the C-suite and the executives working in AI, just to understand a little bit what the actual reality in the field is. What is the pulse? What are executives thinking and doing when it comes to AI?

Every single year we run a CEO survey. We interview close to 5,000 CEOs of organizations small, medium, and large, so we address not just corporations but everyone in the private sector, to understand what their priorities are.

Then, surprise or no surprise, in the last two years AI was not a priority for the C-suite. In fact, AI was not even making the long list of priorities in the C-suite, and that is because in the previous year AI was a priority and the C-suite considered they had done enough, meaning that their business strategies are more aligned with AI, and therefore they don't need to be concerned that much. But they are very much aware that AI now has the potential to be scaled up, hence they allocate more funds not just for experimentation but for scaling up and accelerating adoption.

The reason I am quite optimistic about the state of responsible AI and AI governance is that the message was understood. The external engagement we have had in the last four years alone was heard by those executives, and a lot of effort has gone into taking those ethical principles and translating them into something workable, into actions that can support not just the technical teams, who know how to deal with the complexities of ethical principles and make them relevant for everyone involved, all the different stakeholders, but also those who own those solutions and, very importantly, everyone outside the technical remit, the owners, the people working in different functions and different departments, fostering a culture of collaboration and communication, and that is not easy.

The debate I had with Christina in the Chat was to say that we have heard from our clients that they are doing everything that needs to be done, but it will take time for us to see the results, because what we see right now has been building up over the last 15 to 20 years, when ethical considerations were not included in the design. So I expect that we will see better AI being built in the years to come.

WENDELL WALLACH: What do you expect that we will see? You say it will be better in years to come. Do you think we will actually see corporations override their fiduciary commitment to their stockholders if that actually means there are areas in which AI could be dangerous or could be destabilizing? Do you think the corporations will be willing to take up more of their share of the social costs of these technologies that they are implementing? Oftentimes they are given latitude to innovate but not necessarily responsibilities for areas in which the technologies they implement may be of disadvantage to some stakeholder groups.

MARIA AXENTE: Call me an optimist, but I will say definitely. This is based not just on the fact that we see a different approach to building and governing AI but also on the considerable outreach activity we see around companies desiring to align profit with purpose.

It is surprising and not surprising: companies are getting quite serious about environmental, social, and governance (ESG) factors and making ESG not a sideline of their activity but the core that drives the purpose of their organization. That is on one hand.

So the leadership is very much aware. With all the social changes we have seen with Me Too, Black Lives Matter, and all the change in perception, this has now translated into commitments from a lot of CEOs, creating almost a critical mass and putting pressure on other CEOs to react. I am not going to comment too much on the regulation that is coming.

On the other hand we have consumers, who are starting to demand more and more that ethical technology be deployed. Where competitive advantage is not something the C-suite will grasp, it will be the pressure of consumers or the pressure from regulators and public policy that drives that change. But I think better times are ahead.

Looking at the reaction we have from our clients and their commitment to changing the way they operate around the challenges with technology, I say with confidence that we will see better times ahead. But there will still be areas where we need to be on our toes and continue to challenge and apply pressure, because not everyone will do this willingly or give up the privileges they have. So we will need to keep an eye on areas where things do not happen as fast as they can, but we should also acknowledge that good things are happening, and that should give us not just optimism but the boost we need to continue.

WENDELL WALLACH: I think we have been using a language which, just to be parsimonious, is a bit generalized, as if all of these corporations fall within one package. I guess my question is: Is that true, or do we have to discriminate between different groups of corporations?

On the one hand, we have the IP oligopoly, which has become the AI oligopoly, these companies primarily in the United States and in China that in many respects are more powerful than all but a few countries, and on the other hand we have thousands of smaller companies that are deploying artificial intelligence in very discrete silos and discrete industries, whether that is healthcare or finance. Are you seeing the leadership coming from the larger, multinational corporations? Are you seeing the leadership coming from the middle-level companies that are engaged in particular sectors?

MARIA AXENTE: I think that is an excellent question. Not all industries are equal, and we will see different responses from different companies. It is a mixed picture: we see leaders from big organizations that are quite visionary, and we also see some that are not following the trend, partly because the pressure that comes with responsible and ethical AI means a lot of organizational change. They need to be treated accordingly: we need to provide the right incentives, both positive and negative, for that change to happen.

But where we see much more interest is in the small and medium organizations, where the change is probably a little less painful and where they recognize that responsible and ethical AI can be a competitive advantage. How this will play out in the grand scheme of things will depend very much on the country, the regulatory regime, and the role governments play in addressing those issues, not just via regulation, which takes a long time to create, but also via smart policy and incentives for the right behavior, and also on the future of all those industries. Remember, one of the things we learned from our CEOs is that AI is a technology embedded in the fabric of our society, so we need to understand AI in the context of where the value is being created and what happens when that value is consumed in each of those industries.

I would say that how this evolves going forward will depend on a lot of factors, but at least we have a good start that will get us onto the right path.

WENDELL WALLACH: Thank you ever so much, Maria, for this perspective.

Now let me turn to a little bit of a conversation with Amandeep Singh Gill. Amandeep is now the CEO and project director at I-DAIR, but before that he was Indian ambassador to the United Nations in Geneva. In that role he had a front-row perch for the early conversations under the Convention on Certain Conventional Weapons. For those of you who are not aware, that is the body that oversees the creation of arms control treaties and has been discussing LAWS, lethal autonomous weapons systems. There were three years of preliminary meetings, and then Amandeep was invited to lead the next three years of discussion with a group of governmental experts that looked at how we might be able to move forward on at least limiting, if not actually restricting, the deployment of lethal autonomous weapons systems.

I am going to start talking with you in that role, Amandeep, and then I want to turn to another role you took afterwards. But in that role, you—and Anja Kaspersen, who was working with you—were able to get the countries to move forward with at least some preliminary limitations on their behavior, but there seemed to be a fundamental failure in getting a treaty in place that would actually restrict the deployment of lethal autonomous weapons.

I don't want us to talk specifically about lethal autonomous weapons but about this broader concern that the defense establishments in the larger countries and, more importantly, the international security dialogue really dominate an awful lot of what we consider as far as the governance of artificial intelligence, and sometimes dominate in a way where we are driven toward the weaponization of AI and perhaps a new AI arms race, as opposed to giving space to the more domestic concerns that have been given adequate expression throughout the rest of our symposium today.

Can you comment on that a little bit, of how you view the role of the defense establishment and the international security community in the governance of AI?

AMANDEEP SINGH GILL: Thank you, Wendell. There is no military-industrial complex if we look at the AI scene, but that does not mean that the security applications of AI are not being prioritized by the major powers. The United States, China, and possibly ten other countries are in a sense applying AI just as AI is being applied in every other field from oil wells to health to selling ads to us as consumers, so across the field from training and logistics to surveillance, intelligence gathering, and so on.

It is going to play a role but not as much as we have seen in the past with some of these, let's say, more specific arms-related technologies. Nuclear is one example, since the IAEA was mentioned earlier in the debate.

The bigger concern for me is this overall very competitive view of AI and digital technologies. If you look at the top 30 economies today and at the contribution of the digital economy to gross domestic product, it is growing very rapidly compared with the rest of the economy, and 16 of these countries are emerging economies, so a different power structure will emerge. India and Indonesia are two examples that come to mind.

So this comfortable binary framework that many in the United States are familiar with, whether it was the Soviet Union and the United States in the past, and now for some people the United States and China, this is not going to be the future scenario, and that means international regulation efforts will be more difficult. Treaties will be hard to agree to, and therefore I think our political capital, our time and energy, is better spent on a more flexible palette of norms.

This is what was attempted in the group on lethal autonomous weapons systems. You begin with principles. You get people to agree that existing law, in particular international humanitarian law, applies, and then over a period of time you construct some normative building blocks around that.

I will stop there for the moment. I hope I have answered your question, Wendell.

WENDELL WALLACH: You did. Thank you ever so much.

In just the few minutes I have remaining with you I want to move on to your next role. When the secretary-general created his High-Level Panel on Digital Cooperation you were invited to be one of the two people representing the secretariat and helping the Panel itself fulfill the secretary-general's mandate.

That Panel, which by the way included Nanjira and some of our other panelists in these two sessions, came up with a bunch of recommendations. Those recommendations moved on to being turned into a Digital Roadmap by the secretary-general, but to date we have not seen many governments endorse that Roadmap. That raises some fundamental questions as to whether, for example, we will get any kind of multistakeholder representation, which was one of the recommendations coming out of the Panel.

I don't want you to restrict yourself to the United Nations, but as you peruse the various international governance initiatives for AI, what is your sense of where we are most likely to be able to move forward?

AMANDEEP SINGH GILL: Yes, Wendell. I will also bring in a reference to a group on AI ethics that another panelist today, Yi, and I were part of. That came after the SG's Panel, and there we tried to take this conversation on principles for AI governance to the next level: mechanisms and ways in which you could get those principles to apply.

So it is not just the United Nations; it is these other agencies as well. What the Panel did was start a very nuanced conversation, a 360-degree conversation across those four pillars that Eileen mentioned. So it is not just about this glamorous field of AI governance, this set of rules and norms to govern it all.

The digital world is layered, and no single entity can rule it all. To be realistic, I think we should pare down our expectations of a UN treaty on the Internet or a treaty on AI. What the United Nations can do is, for one, convene these underserved communities and the voices you are trying to highlight, and also decolonize this discussion a little bit and make it a little less about control and more about control in use.

Sometimes I get the impression that the idea is to just wave a finger at people: "Don't do this. Don't do that." What the SG's Panel tried to highlight was that apart from addressing social harm, you need to promote the SDGs, and you need to do that through an architecture that can communicate use cases and problems to multiple stakeholders and that can disseminate new data and evidence about the impact of AI and other emerging technologies, so that the discussions on governance are more factful, driven by practice, and the practice-and-norms feedback loop tightens. This way you also create a loop for the assimilation of soft governance norms at an early stage in the design and development work around AI.

Finally, I think these new platforms, this commons architecture that the SG's Panel highlighted, could make the case for new investment in collaborative research and development, in SDG-related digital infrastructure, and in the capacity building that is needed in low- and middle-income countries.

If I may have one final thought on this, essentially when we talk about AI governance it is like one of those mythical creatures that everyone speaks of but which no one has seen, so sometimes you see it reduced to a list of shared principles. Other times it is conflated with specific mechanisms for certification of algorithmic solutions or ways to protect the privacy of personal data.

What I would like to see coming out of the experience of the past few years is an approach that is conceptually rooted in the human capability concept, what people are able to do and to be, and a layered governance framework which connects the local to the global.

WENDELL WALLACH: This has been a wonderful conversation, but unfortunately we have already exceeded our time, so I am not able to go into greater depth. But before I turn over the floor to the two co-chairs for their closing comments, I hope you will indulge me in taking a minute or so to do a few thank-yous and to also give you some housekeeping information.

There were more than a hundred advisors to the ICGAI planning meetings, and there were 50 experts who attended meetings to develop proposals for Prague. So many people gave us input, and it is sad that that input had to contract itself down to these two sessions, but we are hopeful that most of what we have developed has also stimulated the broader conversation going on in many of the other initiatives.

In particular I want to thank Anja Kaspersen and Michael Møller, both of whom were constantly available to me and gave me countless suggestions, many of which have been incorporated into the program you were able to view. I would also like to thank Jim Clark, the founder of the World Technology Network. Jim has been involved for a few years now in developing the Prague conference and its communications, and he basically encouraged me to go ahead at many junctures when this initiative might have faltered, particularly in going ahead and putting on these online meetings.

Also, I would like to thank Volha Litvinets, a graduate student at the Sorbonne, who for the past two years has helped us with a wide variety of tasks. Finally, let me thank Melissa Semeniuk, Felicia Tam, and the CCEIA media team, who have been so helpful in putting on these online sessions.

The topics that we have explored for the international governance of AI are going to continue to be covered in many forums. Jim Clark will keep you informed about any additional International Congress for the Governance of AI events. We may even gather in Prague or some other city for a congress once we are beyond the pandemic.

In addition, there is the AI and Equality Initiative that Anja and I started at the Carnegie Council for Ethics in International Affairs. We have already put together some webcasts, podcasts, and transcripts on some of the themes that have been discussed here, and we will continue to do so and continue to develop proposals for international governance in AI.

These two sessions, the one you have just listened to and the one two weeks ago, will continue to be available on YouTube, and podcasts and transcripts can be accessed at carnegieaie.org.

With that, thank you all, and let me turn this over to Nanjira and Michael for their closing comments.

NANJIRA SAMBULI: Nothing much to add over and above that, Wendell, just to thank everybody for also sticking with us 15 minutes beyond the hour. I really do look forward to seeing how these conversations move forward, and more importantly how we move into the decision-making elements.

WENDELL WALLACH: Michael.

MICHAEL MØLLER: Thank you, everyone. A really fantastic conversation today and the last time we met. I have a great appreciation for all the reflections and the inputs on meaningful inclusivity, bottom-up engagement, and multistakeholder accountability.

One of the things that struck me today was the fact that every time we have a conversation I feel there is a greater convergence of views going in the right direction. We were pretty much agreed all of us today on a whole series of issues, all of them quite important and all of them needing more attention. What is clear is that multilateral stagnation in addressing these critical issues is unacceptable but also increasingly dangerous for humanity as a whole.

We are moving forward, but my fear is that we are not moving forward fast enough. One of the problems we have, as we all probably agree, is that the speed of adaptation and acceptance by organizations, governments, and individuals is out of sync with the speed of development of new technologies, AI and others. We are simply not capable, certainly our political structures are not capable, of adapting fast enough, and I think that is a problem. The only way we can speed up that process is by doing exactly what we have been doing today and making sure that we translate these discussions into action as well. People who know what they are talking about, as we have patently seen today, have to be not just at the discussion table but also around the decision-making table.

Our political systems are definitely out of sync with the needs of the existential problems in front of us that are in need of solutions: short-term political systems have to take decisions whose effects reach way beyond the normal election cycle, and that is a massive structural problem, which pushes us even further toward this multistakeholder, multilateral approach.

One of the things that struck me also today when we spoke about the different stakeholder groups that have to be involved in the conversations is how all of us agreed that the role of youth is important. If we are going to in any meaningful way come up with solutions that these young people will have to use and have in their hands for dealing with the management of our planet, the sick planet that we are leaving them, they have to be part of the conversation, and they have to be part of how we go about our business. It means all of us stepping outside our boxes to a very large degree. It means changing our mindsets. It means not having our feet solidly planted in the past, or in the best of cases in the present, and being able to "use the future," if you want, in order to manage our daily affairs. In order to do that, we have to have the young people with us.

Let me leave you with one of my favorite quotes from one of my favorite bosses, Kofi Annan, who said very clearly, and I fully agree with him: "You are never too young to lead and never too old to learn."

Thanks to all.

WENDELL WALLACH: Thank you.
