Africa, Artificial Intelligence, & Ethics

Apr 5, 2021

Artificial intelligence is impacting and will impact Africa as profoundly as any continent on Earth. While some African nations struggle with limited access to the Internet, others are leaping into the digital economy with smart cities. Access for all, digital literacy, and capacity-building remain challenges. How can AI, guided by ethics, improve prospects for all of Africa?

JAMES COTTRELL: Good day. On behalf of the Carnegie Council and the Artificial Intelligence & Equality Initiative, I want to thank you for joining us today. We are very fortunate to have some esteemed colleagues joining us as part of what we hope to be a series of discussions around artificial intelligence (AI), equity, and the issues surrounding AI and technology, particularly in Africa, and looking at it from an African perspective.

We are quite fortunate to have colleagues whose experience spans many fields and who join us from all over the world.

Nanjira, why don't you introduce yourself briefly before we get going?

NANJIRA SAMBULI: Hello, everyone. I am Nanjira, based out of Nairobi, Kenya. I research and analyze tech, public policy, and global governance.

JAMES COTTRELL: Thank you. Corinne.

CORINNE MOMAL-VANIAN: Hi, Chip. It is very good to be here. My name is Corinne Momal-Vanian. I am the executive director of the Kofi Annan Foundation. We are based in Geneva. I joined the Foundation last year after a long career at the United Nations. We deal a lot but not exclusively with the protection of democracy and electoral integrity.

JAMES COTTRELL: Thank you, Corinne. Warren.

WARREN HERO: Good evening, good morning, good afternoon. Thank you to the Carnegie Council and Chip for inviting me. I am Warren Hero. I am a technologist. I was previously the chief technology officer for Microsoft in South Africa as well as on the continent, and currently I am both the chief information officer as well as the chief digital officer for Webber Wentzel, a large firm of attorneys working out of Johannesburg.

JAMES COTTRELL: Thanks, Warren.

As you can see, I am Chip Cottrell. I have had a really fortunate career. I have spent most of my time working and living overseas. I currently am a director at Holland & Knight, which is a U.S.-based law firm, although I am not an attorney. I spent the better part of 20-plus years at Deloitte, leaving there as the global chief ethics officer.

This is a broad topic, and I am excited about the platform that we have to develop this.

Corinne, the Kofi Annan Foundation works around democracy, works around equitable approaches. Talk to me a little bit about how you see technology and the whole issue of artificial intelligence as it relates to its impact on what you are doing and what the Foundation is doing.

CORINNE MOMAL-VANIAN: Sure. Let me backtrack a little bit maybe. I mentioned that indeed the Foundation works a lot on promoting and defending democracy.

What happened was in 2018, before he passed away, Kofi Annan, with his usual foresight, established a high-level Commission on Elections and Democracy in the Digital Age. He asked the commissioners to look at how essentially social media, the Internet, and the algorithms that underlie them were eroding democracy around the world, so not specifically in Africa and not only specifically in the Global South but around the world, but he did ask them to look also at what was the specific impact in the Global South and of course in Africa.

The commissioners issued their report early last year, and what they said is that in general social media does not create polarization; it just, as we know, amplifies and intensifies it. They looked at issues of foreign interference, disinformation, and hate speech, and they said that clearly some democracies are more vulnerable than others to these phenomena, and they described some of the factors that make them more vulnerable.

What we have done since the issuance of the report is look at specific regional and national contexts and try to identify those vulnerabilities, which typically arise when societies are already polarized along religious or ethnic lines, when there is mistrust in traditional media, and so on—a lot of what we saw, by the way, in the United States with the last elections, but these phenomena exist in many countries in Africa.

What we have seen is that, unfortunately, the pace of Internet penetration especially has been so fast, and is happening now very fast in countries that are particularly vulnerable and fragile, that it is impossible for the pace of digital literacy to follow. Therefore, that increases the vulnerability of these democracies to the inherent dangers of algorithms in this political sphere.

This is what I can say as an introduction, Chip. But we really are concerned about some of the imbalances that we see as they relate to AI on the continent in the political sphere, and I would be very happy to go into them more deeply later if you want.

JAMES COTTRELL: I will look forward to doing that.

Nanjira, do you have a similar opinion here? I know that your focus has been different, but can you comment upon where you think the challenges are?

NANJIRA SAMBULI: Sure.

First of all, on the matter of social media, which tends to be a useful lens for understanding some of these other technologies, I think a key challenge is that when these algorithms are designed to recommend content, for example, they have not necessarily been built for explainability and transparency in other contexts. So you find that when we talk about hate speech, for example, we saw what went on in countries like Myanmar and many other parts of the world, where the same universal application of these tools was aggravating very different contexts. That has been a key challenge to understanding how algorithms are impacting social media use, or democracy through social media, for example.

Another obvious challenge is language. It is one thing to train machines to understand the English corpus. It is another thing to train them for the diverse thousands of languages spoken in places like here.

I also want to add one interesting challenge/opportunity that will be presented when we start moving more towards voice as well: whether the companies that are introducing AI-oriented, machine-learning tools will accommodate these different language corpora and have people in place, whether for content moderation or to co-audit how those tools fare in terms of what content is recommended to other people. That is where we are headed with this particular challenge.
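To make the language-coverage gap concrete, here is a minimal sketch, with entirely hypothetical data, languages, and thresholds, of how a content-moderation training corpus could be audited for per-language coverage before a classifier is trusted across languages:

```python
from collections import Counter

# Hypothetical moderation training corpus: (text, language_tag) pairs.
# In practice the tags would come from annotation or language-ID tooling.
corpus = [
    ("example post 1", "en"), ("example post 2", "en"),
    ("mfano wa chapisho", "sw"), ("apere ifiweranse", "yo"),
    ("example post 3", "en"),
]

# Languages the platform claims to moderate (assumed list).
supported_languages = ["en", "sw", "yo", "am", "ha"]

MIN_EXAMPLES = 2  # hypothetical minimum examples per language

counts = Counter(lang for _, lang in corpus)
for lang in supported_languages:
    n = counts.get(lang, 0)
    status = "OK" if n >= MIN_EXAMPLES else "UNDER-COVERED"
    print(f"{lang}: {n} training examples -> {status}")
```

A real audit would go well beyond raw counts, but even this simple check surfaces languages a moderation system claims to cover while having almost nothing to learn from.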

JAMES COTTRELL: I am going to probe on that just a second, Nanjira. I too see this as a fundamental issue to this process. Can you talk about some of the things that you are seeing as potential solutions to help correct this?

NANJIRA SAMBULI: Sure.

The big questions now are around how we govern these technologies, how you hold a U.S. company accountable to the citizens of a country in which it is not registered. This is going to be a very important issue because the contexts in which these technologies are being applied or used are not as neutral as we would like to imagine, even as the technology is being designed.

These conversations are becoming very important, moving the conversation about AI governance from just the technical sphere to accommodating the very real social-political environment. In regions like ours, which are usually left behind and decided for, it is going to be very important to make sure that as more and more people come online, as Corinne was saying, they are not going into cesspools of toxic spaces, as we have seen with the way algorithms tend to reinforce those complicated and typically adverse situations on the ground. So this is going to be very interesting and a conversation to keep pushing for.

JAMES COTTRELL: Thank you.

By the way, I have seen some comments already popping up in the chat. If the audience has questions they would like to raise with any of the panelists, please do not hesitate to put them in there, and our colleagues from the Carnegie Council will bring them to our attention.

Warren, you and I have known each other quite a while, but I don't think Nanjira could have laid a better opening for you, knowing the smorgasbord of things you have had your fingers in, and in particular a big American company you spent time with, already mentioned, that has influence around this. I know it was a passion of yours to make sure that these voices were being heard. Talk about the smorgasbord.

WARREN HERO: Yes, absolutely.

I think a couple of things, starting off with Corinne talking about these imbalances or asymmetries. Part of what we have to do is not just understand that those asymmetries exist but understand their nuances, because the ways they exhibit in South Africa versus Botswana versus Kenya versus Nigeria look really different. That is one aspect, understanding the nuances of the asymmetries.

I think part of the issue is broad technology literacy. Among individuals who run companies or who make legislation, I often find a paucity of understanding about the potential of technology. There are both positives and negatives, and you have to understand both the pros and the cons so that we can deal with them.

The other aspect, of course, is skills. One of the significant asymmetries that we see is around skills. As Nanjira said about explainability, if we want to be able to understand the context of these algorithms, to understand mathematically as well as practically how they exhibit these imbalances or biases, we have to have that deep knowledge.

There are, of course, development ecosystems. It is probably only in the last three years that we have started to see really vibrant development ecosystems, but a lot of the toolset is still not resident in Africa. I think there are maybe one or two of the digital dragons that have a footprint in Africa where the tools become readily accessible and usable.

The difference between an organization that can apply the technology versus not is starting to generate significant competitive advantage and also differential competitiveness from a market point of view.

Of course, in terms of narrative, because of these imbalances we find that we as Africa are also becoming marginalized in that narrative about determining the content of the algorithms or even the representation in the databases.

A lot of the disease databases are significantly skewed toward European populations, not Asian, not African, and because of that, individuals using these algorithms often do not understand that, from a disease-profile perspective, because there is underrepresentation of African populations, using them directly could result in some significant issues.
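A minimal sketch of the kind of representation check this point suggests; the records, population shares, and the 0.5 flagging threshold below are illustrative placeholders, not real figures:

```python
from collections import Counter

# Hypothetical ancestry labels for records in a disease database.
records = ["european"] * 780 + ["asian"] * 140 + ["african"] * 80

# Rough population shares (illustrative placeholders only).
population_share = {"european": 0.10, "asian": 0.60, "african": 0.18}

counts = Counter(records)
total = sum(counts.values())
for group, pop_share in population_share.items():
    db_share = counts.get(group, 0) / total
    ratio = db_share / pop_share
    # Flag any group whose database share is under half its population share.
    flag = "UNDERREPRESENTED" if ratio < 0.5 else "ok"
    print(f"{group}: {db_share:.0%} of database vs {pop_share:.0%} "
          f"of population (ratio {ratio:.2f}) -> {flag}")
```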

And then, of course, we have to speak broadly about trust in technology. What are the components of trust in technology, what are the broad components of trust, and how do we start enabling every single person in the community to find their own voice and then potentially start exploring these components of trust?

JAMES COTTRELL: Warren, I am going to come back at you on this trust scenario because, as you and I have talked about quite a bit, this is potentially one of the most significant issues we are seeing, not just in Africa but globally. I have been speaking out and attempting to be as vocal as I can about the trust components on the continent, trying to look at accountability and transparency.

Nanjira, your comments about taking these initial steps speak to that trust—I will call it a trust "euphoria" that is almost going on right now—and we are seeing people push toward it, whether in things you see at the World Economic Forum (WEF) or the initiatives they have on the continent or otherwise.

Warren, what kinds of things are making a difference? You are seeing it with a series of different lenses, so put on your digital hat here for the moment and tell me what kinds of things you are seeing there as a differentiator.

WARREN HERO: I would say to you, first of all, access. Nanjira spoke about the proliferation of Internet access, and so I think that we have to think about the platform and we have to think about the ecosystem.

When we think about the platforms that we are creating, these developer ecosystems are one of them. Whether you think about any of the digital giants, they have these vibrant developer communities, but my concern is that often the algorithms and intellectual property that get produced once again get exported, and we remain subjects of the Fourth Industrial Revolution and not proponents of it.

The other aspect is skills. This is the issue of the potential of micro-certifications, and what micro-certifications can do to make people confident in a specific capability is, I think, really interesting. Even if you drop out of school, you can still achieve a level of competence and then use that competence to create economic opportunity for you and your community.

Then, of course, there is the nature of the work we are doing in Botswana—the use of technology to prevent and reduce human/animal conflict, given drought and how sporadically droughts happen; the Internet of Things (IoT) and artificial intelligence in agriculture to ensure food security; and then, probably the most interesting for me because it is really an unintended consequence, what technology and transparency can do for democratic institutions. This nascent connectivity is creating a nascent transparency that keeps all of us accountable, whether public sector or private sector, because when you know where to look, that information, those data points, can become visible and viable.

JAMES COTTRELL: Nanjira, it looked like you were about ready to say something there.

NANJIRA SAMBULI: Warren raised a very interesting point. I think one important thing, maybe to encapsulate our conversation here: just because as a continent we are not net prosumers—and by that I mean creators as well as consumers of the technologies themselves—does not mean that these issues, or even how the technologies should technically apply to our context, should be an afterthought.

The other side is that because diversity of data sets is the key thing everybody is going for, that puts us as a site at the center of it, because of the sheer diversity of humanity that exists on this continent.

That calls first and foremost for our own policymakers to step up, raise awareness about this, and actually eke out a space at the table where these conversations are happening, whether at a place like WEF, where prospects of how AI could go into governance mechanisms are discussed, whether in health, in education, or otherwise. They can bring up these contexts and really amplify the voices of those who are sitting with those impacts on a day-to-day basis. That is one particular part.

In answering, and baking in a question that Christina Colclough asked me here [in the Zoom chat] about expanding on audits and impact assessments, I would say first and foremost that auditing the technologies that govern our lives is just one part of the governance process. The risk of focusing on it alone is that typically people say, "Okay, we will develop everything, deploy, and then audits will come after the fact," and typically after there has been an adverse outcome. So audits should be just one of the tools in the toolkit.

And maybe even have audits before deployment. Before a government procures an AI-designed system for welfare, for example, how can you make sure that the communities it is supposed to serve have been involved in the conversation? It is not that they have to be technical experts; they are experts in their lived experience, and that is first and foremost what should foreground how we think about governance of technologies like AI. The technical aspects—what we call the "vertical sector" of AI—are just as important, and those conversations should happen in parallel.
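One minimal, hypothetical form such a pre-deployment audit could take is a disparate-impact check over a pilot sample of a candidate welfare-eligibility model's decisions. The four-fifths threshold used here is a common heuristic, not a universal rule, and all names and data are illustrative:

```python
from collections import defaultdict

# Hypothetical pilot decisions: (community_group, approved) pairs
# produced by a candidate welfare-eligibility model before rollout.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, ok in decisions:
    total[group] += 1
    approved[group] += ok

rates = {g: approved[g] / total[g] for g in total}
best = max(rates.values())
for group, rate in rates.items():
    # Four-fifths rule: flag groups approved at < 80% of the best rate.
    flag = "REVIEW BEFORE DEPLOYMENT" if rate < 0.8 * best else "ok"
    print(f"{group}: approval rate {rate:.0%} -> {flag}")
```

Such a check only catches one narrow kind of harm; as the panelists note, involving the affected communities has to happen alongside, not after, the technical review.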

I think if we are going to be talking about equality, governance, coexistence, and cooperation really, understanding that sociopolitical and sociocultural impact is very important when we are discussing the future of AI in our lives.

JAMES COTTRELL: Corinne, the United Nations has done an awful lot of looking at this from what I will call a "constructive perspective." How do you marry some of the work that the Foundation has done, and that you have seen within the UN system, with what is actually going on in Africa? You have heard, I think, some not insurmountable issues pop up, but clearly issues that have to be addressed in the broadest kind of context. Where, from your experience, can we see the light in the tunnel, hopefully not a train coming at us but a beacon drawing us toward it?

CORINNE MOMAL-VANIAN: Before I come to that, Chip, maybe I can just comment on something that Warren and Nanjira also mentioned.

I know that we have tended, especially at the Foundation, to talk mostly about digital threats, but we are entirely aware that we should not throw the baby out with the bathwater and that there are many, many benefits Africa can derive from AI and big data—food security was mentioned, energy efficiency, and health care, which I think Warren mentioned—as long as, as Nanjira said, they are properly designed within the right governance frameworks and so on.

So the light at the end of the tunnel may be in the fact that there is definitely awareness on the continent of the need for this framework.

For instance, we had a discussion the other day with European and African experts about whether some of the work the European Commission is doing at the moment on the Digital Services Act, which is meant to regulate the digital space, disinformation, and so on, could be replicated, or whether Africa could base some of its own regulatory work on it when it starts. The problem is that everything really has to be very context-specific, as Nanjira said.

But there is very high awareness. For the first time, last month the African Commission on Human and Peoples' Rights, which is a subsidiary body of the African Union, passed a resolution in its last session about the impact of AI. When something percolates down to a bureaucracy or an institution like this, it means it is already everywhere. In one part of the resolution they said that African voices must be heard, that African governments and other actors must be part of the global conversation on AI. I thought that was very encouraging; there is definitely great awareness of the need.

The other light at the end of the tunnel for me, coming back to the issue of democracy, is that there is also great awareness now among the tech platforms of their sociopolitical impact.

We talk a lot to Facebook, for instance. They come to us—and to many others, I am sure—and ask for our opinion ahead of elections, for instance, to tell them if there is any risk of violence, whether we have identified a fragile context they have to be particularly mindful of. Then we tell them, and of course we always cite the issue of languages that Nanjira mentioned: you have to take into account that a lot of content is created in local languages, and therefore your moderators must be able to understand these languages, and so on.

But they are conscious. They are not, I would say, publicly acknowledging a responsibility, but they are accepting that they have to be part of the solution, and we find that this is really, really encouraging. Certainly we are talking to the Twitters and the Facebooks and so on in Africa in this context.

I will say those two things are encouraging to me: the realization at the government level and in intergovernmental structures that there is a need for a regulatory framework of some kind, and for Africa also to be part of the more global effort; and then the acceptance by the tech actors of their part of the responsibility.

JAMES COTTRELL: It is really interesting to see that engagement, and I am glad to hear that they are coming to you as well as you engaging back with them in a broad series of perspectives.

One of the issues I have noticed in the 25 years I have been working in Africa is the challenge of opening up that dialogue—and I saw a point came up about e-commerce, which we will talk about in a few minutes during the Q&A—the challenges faced as the rest of the world moves into different contexts of procurement, of consumer demand and needs, and of asking "How do you stop corruption?" as part of that process, which is a huge issue and one I keep beating the drum about.

It occurs to me that one of the areas I have focused on—in fact using some of the tools the WEF set out through the Partnering Against Corruption Initiative a couple of years ago—is trying to understand tribal influence as it relates to setting the tone within different geographical regions. You may have 150 tribal influences in a small country as opposed to the larger context.

Nanjira, I might come back to you and ask you to comment on that, and then I will come back to Warren and then to Corinne. Is it too much of a stretch to think about an equitable approach as it relates to understanding the tribal influences and trying to use that as part of an initiative to make sure that we are getting a better outcome?

NANJIRA SAMBULI: I would hesitate to frame it around tribal because that is so nuanced and has different political connotations, because we could argue every region does have its own tribes.

What is important is indeed the question of governance, even before you add the technologies, of how the tools and resources that are available are distributed. Here we are talking about technologies intersecting with old-school or analog governance issues, and we know that is a serious political undertaking, one also very strongly linked to Africa's position in the world vis-à-vis aid and impositions, if you will, around how to govern ourselves: mostly half-baked solutions coming from somewhere else, or from a strategy paper written somewhere without much context.

The risk when those two things merge—when AI systems or emergent technologies come into these intractable challenges—is that with AI specifically there is an idea, almost a fetish, of administrative universality: if something is tried out in a small setting, even in Rwanda, that does not necessarily mean it translates to a similarly small country like Burkina Faso.

The technologies are also not being designed in a neutral setting. These are two strains of conversation that have to happen in lockstep, and that is why, when we think about the governance of digital technologies, whether AI or even just advancing Internet connectivity, we absolutely have to keep going back to: What is the community that is supposed to be served? How are they seen in the insights, in the raw material that is fed through the processes or the decision-making, whether technological or analog, that then gives us those outputs? Especially for those who have been historically excluded, how do we make sure we do not turbocharge that exclusion with technologies we are almost a bit too excited to adopt—not because they will not serve the purpose, but because they cannot course-correct by themselves for the exclusions and inequalities that exist across all societies in the world?

JAMES COTTRELL: I could unpack that, and we could stay on for another five hours just on the last three minutes of your comments. That is really good. Thank you.

Warren, it occurs to me that the experiment you and I are working on in Botswana might be able to use some of the same kinds of influences that Nanjira and Corinne have been talking about. Why don't you describe a little bit of it and maybe what the implications might be for broader sets of solutions?

WARREN HERO: Absolutely I will do, Chip.

I think part of what I believe is really important, philosophically, is to understand some of the philosophical groundings. I refer to Walter Truett Anderson, who thinks about governments and institutions as linguistic structures. He talks about the fact that even problems that are not strictly communication problems, like failures of mechanical systems, can be explored in terms of things said and not said, questions asked and not asked, conversations never begun or left uncompleted, or alternate explanations not discussed.

When I think about universality, one of the things I remember leading up to our considering the issue at the Maun Science Park was taking the conversations that communities were having; they are called kgotla in our part of the world in Southern Africa, where communities gather under trees and have conversations. Through some really simple technology, we were able to record them and use natural language processing to bring them into the legal and legislative process for Botswana.

That ability to listen to conversations, to extract insight and understanding from them, and to appreciate the context, because of the way these new artificial intelligences learn, allowed us to move away from having to worry only about really big data sets. So I think that is one opportunity.
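A minimal sketch of that kind of pipeline, assuming the kgotla recordings have already been transcribed by an upstream speech-to-text step; scikit-learn's TfidfVectorizer stands in for the natural language processing, and the transcripts are placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder transcripts, one per community meeting, assumed to come
# from an upstream speech-to-text step.
transcripts = [
    "water access borehole maintenance livestock grazing water",
    "school fees road repair clinic staffing school transport",
    "drought relief grazing land water allocation drought",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(transcripts)
terms = vectorizer.get_feature_names_out()

# Surface the top-weighted terms per meeting as candidate themes
# to feed into a legislative review process.
for i, row in enumerate(tfidf.toarray()):
    top = sorted(zip(row, terms), reverse=True)[:3]
    themes = ", ".join(term for _, term in top)
    print(f"meeting {i + 1}: {themes}")
```

The appeal of this approach is that even a small number of meetings yields usable signals; it does not depend on the very large data sets that many AI applications assume.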

But the opportunity to link things like the Maun Science Park—and I will just spend a minute on it—is in the context of the country's entire digital transformation. We understand that as a country Botswana has a couple of things going for it—the agrarian economy, tourism, and of course the natural environment.

So there is the ability to conceptualize a sustainable approach to what sustainable communities could look like: how those communities could trade with each other, how their trade can create grids, specifically energy grids, that are community-based, and then, because of the meshed nature of the grids, create a more robust electricity supply and make sure that every single person in the community actually participates. The ability to create a structure out of compacted dirt but embed it with IoT devices enables me to trade my surplus water and energy with the grid for financial benefit, and also to trade within communities.

I think two things. The fact that this starts at the most elementary level, which is a person, their dwelling, and the way they participate in the community, and their choice, from a privacy and sharing perspective, about whether or not to participate—that is the power of this. The ability to take these conversations, make them visible, get people to participate in them, and once again make data-driven rather than anecdotal decisions—I think those are some of the things we are excited about, specifically in the Maun Science Park project in Botswana.
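A minimal sketch of the surplus-trading idea, with hypothetical households, meter readings, and tariff; real metering, settlement, and consent handling would of course be far more involved:

```python
# Hypothetical smart-meter readings (kWh) from IoT devices embedded
# in community structures: positive = surplus, negative = deficit.
readings = {"household_1": 3.0, "household_2": -2.0,
            "household_3": 1.5, "household_4": -1.0}

PRICE_PER_KWH = 0.12  # illustrative community tariff

surplus = {h: v for h, v in readings.items() if v > 0}
deficit = {h: -v for h, v in readings.items() if v < 0}

# Naive matching: move energy from surplus households to deficit
# households and record the payments owed within the community grid.
for seller, avail in surplus.items():
    for buyer in list(deficit):
        traded = min(avail, deficit[buyer])
        if traded > 0:
            print(f"{seller} -> {buyer}: {traded:.1f} kWh, "
                  f"${traded * PRICE_PER_KWH:.2f}")
            avail -= traded
            deficit[buyer] -= traded
            if deficit[buyer] == 0:
                del deficit[buyer]
        if avail == 0:
            break
```

Note that participation here is opt-in by construction: a household that withholds its readings simply never enters the matching, which mirrors the privacy choice Warren describes.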

JAMES COTTRELL: Thanks, Warren.

I have had the pleasure of being involved in this for some time. Importantly, this is going to be replicable. Taking it from an analog perspective to a digital perspective and broadening that context throughout Africa is the target, bringing together some of the thought leaders and not necessarily the big corporations from outside. In fact, if anything, we are trying to lean far the other way to make sure that it is balanced.

I will remind everyone that, please, if you have questions, put them in the chat. Alex Woodson is going to help us through this in a few minutes. After this next round of conversations, we will move to 10 to 15 minutes of Q&A for the panelists.

One of the themes of our session today is ethics and equality. What does that mean in this context? We have heard a little bit here. We have talked about this.

Corinne, I am going to come to you first. These are two words that are front and center today in a number of different contexts and in a number of different parts of the world, but what does it mean here as it relates to this?

CORINNE MOMAL-VANIAN: These words are very important in this context because we do not see, for now, at least in the political space, AI driving more equality. We see the creation of digital elites—as on any other continent, by the way—and there is a growing divide between those who can use AI, in particular for their political purposes, and those who can't, with civil society not having the same kinds of skills and resources as governments in the use of AI.

For instance, we have seen use of very sophisticated disinformation campaigns during elections. Oxford University has a project on technology and democracy that has described extensively the use of trolls and social media bots and so on by some governing parties.

That is not something that a lot of actors can master at the moment, so there is really this imbalance, this asymmetry we have already discussed, between those who can, having either very quickly developed their own skills for political purposes or resorted to what we call "digital mercenaries" to do the work, and those who can't.

We know that, for instance, a lot of governments in many countries, not only in Africa, are resorting to AI-powered surveillance tools, such as facial recognition and so on, and sometimes they just pay for the services of companies that are based elsewhere.

What we are seeing in terms of equity, equality, and ethics is actually, for now, a worrying trend in this growing divide between those who can use AI, sometimes for nefarious purposes, and those who can't.

JAMES COTTRELL: Nanjira, do you agree with Corinne's assessment?

NANJIRA SAMBULI: Yes. It is always when the rubber hits the road in practice that the real theater of what we put together as recommendations comes up. But I like to say, let's complicate the equation and sit through that trouble, because that is the real meat of what actually happens after we recommend stuff.

I think a lot of our conversations about what to do, especially where technology is involved, just do not accommodate this real messy theater that is our coexistence.

Terminology has been really interesting. How terminology is used in one region may not necessarily apply. A digital mercenary might be somebody else's digital dissident, just as one random example there.

Before we say, "This is the way forward," I am always a fan of complicating it by sitting with it a bit more and asking: "Whose perspective has not been represented in how we are normalizing or popularizing a way of thinking about this? How can we go back to that, and how can we create truly inclusive ways of thinking about how these different issues and technologies, as we design and deploy them, could also mitigate the harms we always have?"

We have to get to a point where we don't always have these typical tropes of "Oh, regulation slows down innovation," or where we only think about ethics after a harm has been meted out on people, especially because this pandemic has made technology so intimate in our lives.

JAMES COTTRELL: For sure.

All right, Warren.

WARREN HERO: I think for me always the place to start is provocative questions. When you think about aspects, when I think about AI design, there are a couple of things that I think about. I think about validation. I think about security. I think about verification. I think about control.

Under validation there is this issue of: "Did I design the right system? Is it based on the right assumptions? How does it actually improve the situation for all stakeholders?" I think this is part of the opportunity. When we take a multi-stakeholder view of both the supply side and the demand side, it gives us a better way to start thinking about balancing some of these inequalities, along with systems of verification.

I think the other aspect is to understand that this is only the tip of the iceberg and that it is going to get more complex. The flywheel is just going to spin faster. Right now we are talking about AI, and when we overlay it with things like quantum computing, what quantum computing will do to an algorithmic organization, it starts creating even further imbalances.

The tools that we have are the tools we humans use—plans and conversations—and the issue with planning is the ability to think in multiple time horizons. I often find there is an imbalance between exploration and exploitation, and when we appreciate that we can think about both the explorative and the exploitative aspects, those multiple time horizons give us a completely different view.

What we know from a strategic perspective is that most organizations, not having enough appreciation of their life stage, always under-explore or over-exploit, and both of those lead to pathologies.

That ability to appreciate your context—and once again, it is about being able to have sensing mechanisms—and to consistently reflect on those sensing mechanisms is what grounds our planning in reality.

Also, from a planning perspective, we should move away from single-trajectory planning. The use of scenario planning or scenario thinking, and of things like design thinking, forces us away from the convergent part of the thinking process toward the divergent part.

I think where we will find a lot of value is in the diversity and in the inclusion. A friend of mine always says: "Diversity is being asked to the dance. Inclusion is actually dancing." I think what we have to do is we have to create the opportunities both to be invited to the dance and then actually perform the dance.

JAMES COTTRELL: That is a good illustration to jump into the next stage. I have heard you mention that once before and I have used it. You should probably copyright that conversation.

Alex, I see that there have been some questions that have popped up here. In the last 10 minutes or so that we have can you walk us through some of the questions? We will go around and the panelists and I will jump in as appropriate to answer.

ALEX WOODSON: I will ask two questions from Christina Colclough that have come through. She actually did an event with us in November.

I see Wendell Wallach, the co-director of the AI Equality Initiative at Carnegie Council, just asked a question, so I will ask that after Christina's questions.

First question: "How will the e-commerce discussions happening on the fringes of the World Trade Organization affect African countries' abilities to build and shape their own digital transformations?"

Second question: "On digital inequalities, how best can the global community support and empower African nations in their digital transformation? What should happen that isn't? What should stop that is?"

JAMES COTTRELL: Who wants to jump in there first? Nanjira, you clearly have a reaction to this.

NANJIRA SAMBULI: It goes back to the theater I mentioned, doesn't it? Antecedent global governance mechanisms will impact how digital transformation happens.

I think that is actually one interesting area where we might get more engagement from African policymakers, because all matters digital and trade are coming up for discussion and they have a bit more muscle for negotiating some equity there. So that is a very useful avenue for everybody who is trying to think about how digital economies here in Africa will shape up.

Another area, in fact, is Africa trying to set up a continental free trade area, and down the line there has been talk about a digital single market, and whether it will threaten others for whom, politically speaking, a continent like this one getting its act together might not bode well. These are the real theaters around which these questions come up.

Actually, to Christina's question, the World Trade Organization is a very useful lens through which to see the continued discontinuities as we go from recommendation to actual implementation.

JAMES COTTRELL: That was a good question, Christina.

Corinne?

CORINNE MOMAL-VANIAN: I don't think anybody has lessons to give to Africans as they tackle these questions.

I do think that, for instance, the European Union has become a kind of regulatory superpower, as seen by many. The Americans, for instance, are very much looking at what the Europeans are doing with the Digital Services Act to see whether they can get their own act together in tackling the difficult balance between free speech and disinformation.

In the same way, it is very interesting to me. I was listening to a European expert the other day, and she was saying that during the whole process of preparing the Digital Services Act they took into account the fact that other regions may want to look at it. They were conscious of the Brussels lens, the fact that others may be looking to Brussels to show the way.

One thing they were very conscious of, and that really influenced the way they drafted the Act now being considered by the European Parliament, is that they refrained from asking for too much regulation, because they thought that might give an excuse for some in other regions to stifle dissent, frankly, and to use regulation as a way to kill free speech.

I think the African stakeholders have all the tools they need to address the issues, but there are other examples that can be looked at usefully.

The one area where global action could be very useful is in how we tackle the accountability of tech actors. When I was talking about the imbalance of powers, what is true in America, for instance, is that the Big Tech companies have such gigantic resources that some governments just cannot go against them financially. It is even truer when the resources of the governments are much smaller. That is an issue that should be addressed globally, because individually each government is not going to be able to address it.

The fact is that, as we know, the assets of many of these tech companies are larger than the GDPs of many, many countries. That creates an imbalance that is really something that has to be addressed globally, frankly.

Even if there is a regulatory framework, a solid one, that is put in place, for instance, at the level of the African continent, if governments cannot hold companies accountable for the implementation of this framework, what purpose does it serve? I think that is a really important issue, Chip.

JAMES COTTRELL: I agree with you.

Warren.

WARREN HERO: I will geek out and say to you: cybersecurity. All of this is built on security, so the ability to gather data and telemetry and to set up sensing centers in African countries, so that we can essentially prevent threats from propagating, is one of the definite things.

I also think about the concept Microsoft, Amazon, and Google were discussing of a Digital Geneva Convention, specifically around cybersecurity, to prevent citizens from being targeted by nation-state actors. I think those things would be practical.

From a security point of view, if you don't have the requisite security in place, you cannot ensure people's privacy, and therefore trust is unlikely to develop for these methods of transacting.

And then, of course, there is the ability to ensure a vibrant market, as Nanjira talked about with what is happening in Africa. If we can't protect intellectual property, then on what basis do we create markets?

JAMES COTTRELL: Right. Really valid. That was a great question, Christina. Thank you very much.

ALEX WOODSON: Two questions from Wendell Wallach: "Warren, can you give us an example where the potential exploitative dimension of an implementation was recognized early on during the explorative stage? What happened?"

And: "Much of the discussion has been focused upon functioning governments. How do you perceive the impact of the tech revolution in Africa on failed states?"

WARREN HERO: I will use the example of Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) from a gene-therapy perspective, where you have so many entities exploring the benefits of gene editing specifically. But some things are really going wrong with gene therapy in certain cases where, from an accessibility point of view, we went almost straight from exploration to monetization, and the price point for access to gene therapies that can save lives, especially for some really difficult-to-treat cancers, becomes a big issue.

I think the other aspect is around health and health IoT and the way they are being used. In the South African context, we have deployed artificial intelligence with IoT, for instance, to address the preventable causes of death and to make sure that we can keep individuals triaged within the primary healthcare sector, and that is showing some definitive benefits.

So those issues around technology, the way it is used, the way we monetize it, and the way we think about intellectual property protections are illustrated by the CRISPR example from the health sector.

Of course, one of the fundamental things in the digital context is that digital economies work where you can blur the line between the digital and the physical, especially as we think about the use of data to understand when cows are able to create calves, which has an impact on herd yield. We can therefore potentially increase herd yield without a concomitant increase in input costs, though of course we then have to think about greenhouse gases and what goes on in that regard.

The thing that I would say to you is one of the biggest things possibly in this regard is to think about synthesis because a digital context is only valuable when we can synthesize all of these aspects and bring them together so that they can illuminate relationships in the ecosystem that previously we did not see.

JAMES COTTRELL: Thanks, Warren.

I am going to turn the second part of this question, which is: "How do you perceive the impact of the tech revolution in Africa on failed states?" Corinne, I'll run to you first, and then Nanjira.

CORINNE MOMAL-VANIAN: I don't really like the term "failed state" because I don't know how people define it.

We talked earlier about the fact that the speed of change was happening the fastest in very, very fragile contexts, some countries which have just emerged from long deadly conflicts, like South Sudan and Somalia, and where Internet penetration is happening in different ways at the moment.

But let me take, in fact, an example outside of the African continent, in Lebanon, where a lot of people are saying that Lebanon is actually a failed state nowadays. It was a middle-income country doing relatively well on many measures, but it has in the last five years really fallen on all measures of wealth, development, and so on.

When the explosion in the Port of Beirut happened [last year], the government didn't do anything to provide rescue, shelter, or assistance to the hundreds of people who were homeless and affected by this dreadful thing.

It was digitally savvy young people who managed to provide services. This is a population that was actually very connected and where they have a high level of digital literacy, and they were able to provide services that the government was not able to. For instance, two teenagers put together a website and a map to enable people to find each other because lots of people were lost, family members and so on. They did what the government should have done. That was an example for me.

You can see it both ways. In some ways, yes, AI and digital technologies are sometimes arriving in very fragile contexts that lack the strong institutions needed to make sure the benefits are drawn and the threats minimized; but in other contexts digitalization, AI, and other related technologies can help citizens access services they otherwise could not, because the government cannot deliver them.

JAMES COTTRELL: I really like that analogy, Corinne. Thank you. I was engaged in that, not to an extensive extent but enough to understand, and the value of what was done literally helped save lives; that makes a difference.

Nanjira, you get the last word.

NANJIRA SAMBULI: I will just sum it up by saying good technology in the hands of bad actors equals bad outcomes. Whether failed, authoritarian, or democratic, the intention, the intrinsic motivation of why technologies are being applied, is very important. Here we have seen, for example, with facial recognition, even in so-called functioning governments, how adverse that outcome has gone.

Information control is right now the biggest question for governments that want to retain power, and especially governments that are sensing an illegitimacy from their citizens. These tools can help them oppress their people regardless of what country in the world you are in today.

But at the same time, these are the tools that in the hands of people who are just trying to get the services that are due to them could also be very encouraging tools.

It does take us back to the question: Could we go back to decentralized, bottom-up governance? It is too bad that the liberal world order has been built around this idea of the individual and rational actors. We are not always rational as human beings, and we need to acknowledge that.

Even as we design good technologies, the conversation we need to have about artificial intelligence specifically is about both the harms and the risks, not just as they might happen down the line but as they are happening today across the board, without dismissing them as an anomaly or an aberration in a more democratically inclined government versus effective elsewhere, or whatever terminology we want to use.

We really need to start having less tech-deterministic and solutionistic conversations and really bake in these realities, or else we are going to turbocharge harms inadvertently because we did not have intellectually and morally honest conversations.

JAMES COTTRELL: Great way to end the conversation today.

I know that there were other questions that popped up and I am sorry that we could not get to them today.

I hope this will mean that there is enough interest that you will come to our next discussion around equitable deployment of AI in Africa. If you have ideas or thoughts, please do not hesitate to reach out to the Carnegie Council.

On behalf of the AI and Equality Initiative and also the Carnegie Council, thank you for joining us today.
