Procuring & Embedding AI Systems in the Public Sector, with Rumman Chowdhury & Mona Sloane

Oct 6, 2021

In this episode of the "Artificial Intelligence & Equality Initiative" podcast, Senior Fellows Anja Kaspersen and Wendell Wallach are joined by Mona Sloane, research scientist and adjunct professor at New York University, and Rumman Chowdhury, Twitter's director of machine learning ethics, transparency and accountability, to discuss their recent online resource aiprocurement.org. The conversation addresses key tension points and narratives impacting how AI systems are procured and embedded in the public sector.

ANJA KASPERSEN: In this episode Wendell and I are joined by two wonderful and incredibly gifted practitioners, Dr. Mona Sloane and Dr. Rumman Chowdhury. Mona is a sociologist working on inequality in the context of artificial intelligence (AI) design and policy. She is a fellow with New York University's Institute for Public Knowledge and an adjunct professor in the Department of Technology, Culture, and Society at NYU's Tandon School of Engineering. Dr. Rumman Chowdhury is a social scientist whose passion lies at the intersection of artificial intelligence and humanity. She is currently the director of the Machine Learning Ethics, Transparency and Accountability Team at Twitter.

Mona and Rumman, when we set up this podcast I was really encouraged and excited about this new online resource that you have created called AIProcurement.org. This is meant to serve as a resource for technologists, public servants, researchers, community advocates, and policymakers to assess and address issues around artificial intelligence, risks more generally, and government procurement processes as AI systems are increasingly being deployed and embedded in the public sector. Could you tell us more about this project and how it came into being?

MONA SLOANE: Thank you so much for having us. It's a joy to be here, and it's always wonderful to share a stage, albeit virtual, with Rumman, and talk about our work.

The procurement project came about through conversations that we had between the two of us, kind of thinking about or really observing the acceleration of adoption of automated decision systems in the public sector, particularly in the pandemic. We had conversations about the risks that are associated with that, and Rumman had been doing some thinking and work on that prior to that, and we got together and thought, Hey, how can we actually kick off a conversation and bring together maybe experts who share experiences, have common questions, and identify what's going on and map the field a little bit together? That was the spirit in which we got together and started to think about how to create the tool as you describe it.

We essentially ended up receiving a grant to bring together a group of about 30 experts from different fields who can bring different perspectives to questions around equity, procurement, and artificial intelligence. We got them together in three roundtables that had different focal points, and what you see now online is really a combination of the research and work that Rumman and I have done together plus what came out of these conversations.

I am going to toss it over to Rumman to add to that.

RUMMAN CHOWDHURY: I want to talk a little bit about these groups that we got together and the intent of it. Mona correctly identified that what we observed, even before the pandemic, was this information and skills gap in most government sectors, where there are definitely needs to be met for the public, but they do not necessarily have the people to be able to build and scale technology to address those needs. Traditionally those are met through your regular procurement processes—identify a vendor, etc.

The problem with artificial intelligence and a lot of emerging tech is that it is very, very different from getting a vendor contract for software or for buying computers, widgets, or whatever. Putting something like automated decision-making systems through the traditional procurement process seemed to be insufficient. We wanted also to verify and we wanted to learn from the community, and we wanted to build it in a way that was capturing a variety of voices but also ultimately producing something that is useful and usable.

Anja, I think that is really what you're talking about. At the end of the day, we didn't want to write a report, yet another very long document. We actually wanted to give people actionable items and things they could do.

ANJA KASPERSEN: In your report you make this very interesting distinction between public use technology and public interest technology. Could you shed some light on that distinction?

MONA SLOANE: Public interest technology very broadly is technology that serves the public interest. The distinction that we tried to make as a basis for thinking about procurement is that there is a difference in terms of, for example, compliance standards in public use technology versus private use technology. Public use technologies are subject to very different legal criteria than most private use technologies, and that is really a challenge when we add on the characteristics and dynamics of automated decision systems, which we haven't quite figured out for the private sector, either.

So there are real challenges, but at the same time—and this is one of the reasons we were so happy with how the project developed—procurement is quite powerful in thinking about what standards should be set. Procurement can also be a space for innovation, for thinking about what kinds of auditing standards, transparency standards, and accountability standards we need to bake into the procurement process, and if they are baked into the public procurement process, they might "trickle down" into the private sector.

ANJA KASPERSEN: In your report you also speak about these six "tension points" that challenge public interest acquisition processes and embedding AI technologies into governance systems and processes. Could you tell us more about these six tension points?

RUMMAN CHOWDHURY: There are three parts to this paper. At a high level, the first is tension points. In other words, these are the areas in which we get friction when we try to transfer technology that is often built for private use into a public setting. Second is narrative traps, things we tend to say that actually are harmful, miss the point, or can lead us in the wrong direction. Third is specific calls to action. Our six tension points are:

1) Definition. First of all, what is an algorithm? What is AI? What is it doing? What is appropriate and inappropriate use? Making sure that we are aligned on what we are all talking about, rather than assuming we all know what constitutes an automated decision-making system. My personal sticking point on definitions is the term "audit." For example, there is a lot of legislation and regulation talking about audits and mandating audits, but nowhere do we actually define what an audit means, except for some guidance within the EU Regulation that has come out. In the absence of good guidance on what an audit is, everybody ends up creating their own definition of what an audit is, which is more dangerous.

2) Process. Identifying what the process is that exists today, what it is able to capture but also what it is not able to capture. What we don't want are systems that are at odds with each other, conflicting, and extra bureaucracy. What we do want is to identify where there are gaps and address them.

3) Incentives. Why are people doing this? What sort of incentives and resources do people within the government sector have? What motivates them to do their job? And what does it mean if you are a start-up or a major company and you're trying to work with the federal government or some sort of public entity? So it is really helpful for folks to understand what the incentives are of all parties and also how it is they want to build their technology to match the incentives that they are trying to achieve.

4) Institutional structure, so how government is built. It is actually often operating at a timeframe that is very, very different from a for-profit company, and this is often why we see that start-ups don't want to go into the government sector and work with government. They have a difficult time versus the typical actors, and there is a reason for that. If it takes a year to even get a contract in place, another year to get resourcing and staffing, and another year to get on someone's road map, for a start-up that needs to move a lot faster that may not be the best use of your time, even if it could be a very profitable venture. So, thinking through how different groups move differently and have different timeframes and again different levels of bureaucracy.

5) Technology infrastructure. I think one of the hardest things for the private sector to understand about the public sector is that the public sector does not always have a massive wealth of technological infrastructure. You may be working with systems that are older and outdated, and you may be working within the confines of the way things have existed traditionally. You can't always accelerate something to the 21st century overnight because that causes a lot of problems, and the reason it does—going back to your question, Anja, about what is technology for public use—is that technology for public use needs to be built for 100 percent of citizens and 100 percent of recipients. That is actually not how most companies are built. Most products are tailored to a particular market, so I can say, "I built a product for women in tech who are over 35," and I focus on that market, and I don't care about anybody else.

But that's actually not how things need to work in the public sector. If I took that same technology and applied it to the public sector, I would have to worry about every other income group, age group, background, ethnicity, and demographic, and that is very, very different when you think about how quickly we can innovate and how careful we have to be about who we are bringing with us.

6) Liability. When we are thinking about procurement innovation, we are not devoid of liabilities and legal infrastructure. Some of the most interesting cases of organizations or governments in this case being found responsible for adverse outcomes of the use of technology has been because we have certain rights and protections as citizens in the United States. We have things like due process, where we have a right to understand why decisions are made. We do not necessarily have those rights in the private sector, so when we think about utilizing private sector technology in the public sphere, we also need to be aware of what liabilities may exist because it's not just easily transferrable.

ANJA KASPERSEN: Do you want to continue, Mona?

MONA SLOANE: Yes. These were the six tension points that we identified in the project. The reason why we chose tension points, I just want to say real quick, is that, as I said in the beginning, we really want people to take these tension points into their organizations, into their own roles, and into their own work, consider them from their professional perspective and point of view, and think about how they can work with them and maybe how they can resolve them within the context of the problems that they are working on, which are very different. If you work on public health and hygiene in the city, for example, that is a completely different set of problems that you are facing versus education, transport, or food security. That's the spirit of the tension points.

We then also have the narrative traps, and they are one of my favorite parts of the project. The reason for choosing narrative traps is that we want to be mindful of the pitfalls of the stories that we tell each other as we do our work and engage professionally with these topics. They are warning signs or alarm bells that should go off every now and then.

The first one we have is—and this may sound controversial initially—that we must engage the public. This is a narrative trap because just broadly proclaiming that we must engage the public deliberately obscures what engagement really means. Does it refer to a democratic process? Does it refer to citizen participation in administrative decision-making processes? Does it refer to public oversight or anything else? Again, engagement can be very domain-specific. Public engagement in public space design can be very different from other kinds of public engagement. But what happens when we leave that undefined is that communities will find it very difficult to demand it, so we make it difficult or impossible to ask for engagement because there is no clear definition and there are no clear pathways to ask for public engagement.

The other problem with that narrative is that it really leaves unclear what "the public" means. That ignores communities who have historically been excluded from the frame of the public. So, these are the two big problems with the narrative "We must engage the public," and again, this is something that we hope that people will take into their work and address.

The second narrative trap is oversimplified definitions. Calls to simplify definitions of very complex problems, contexts, and issues around artificial intelligence and its deployment, or the deployment of very monolithic framings—"the public," as we just spoke about, but also "government," "bias," "the algorithm," which Rumman just spoke about—exclude nuance. That is a narrative that can prevent us from engaging with the individual context that can be ever so important, and that in turn can uphold historic power structures because we don't see that engagement. Rather than generalizing about bias, it might actually be more useful to talk about what kinds of harms are occurring or what impact is occurring. That is the second one.

The third one is the idea that the main threat is government use of data and that we must be very careful when we give data to the government or to public entities. There is a narrative that this threat is bigger than private data use. However, in many, many contexts, governments and government agencies do not even possess data archives that are remotely as big as the data archives of industry. They also don't have the capacity to collect, clean, or analyze the data, and they have much stricter rules when it comes to the collection or the processing of that data. So they have a very limited ability to use data effectively where it would actually be needed. When we say that government use of data is the biggest threat, that is a trap because it diverts regulatory attention and scrutiny away from where we need it, which is the private sector.

The fourth one is that we only need one incentive to make things better, the kind of silver bullet approach, which implies that there is one solution to a multitude of complex and emerging problems that can occur in the context of AI and procurement. The one-incentive idea perpetuates the notion that goal alignment and compliance can be achieved through a single lever, for example, avoiding fines. That strategy ignores that there are larger systems—capitalist or bureaucratic systems and incentives—at play for different actors that may override that single incentive, and I think we can see that quite nicely currently in the context of the European Commission throwing a bunch of plans at the tech industry.

The last one is—and this is one for us as people who work on procurement in the context of the project but also for folks who work on procurement more broadly—the narrative trap that procurement itself can be a silver bullet. Narratives that procurement can solve any and all issues related to government use of AI systems are problematic and promise innovation and change in a way that cannot be delivered, especially because procurement differs across a number of different government agencies. It also ignores the fundamental fact that it is really difficult to change and improve the procurement process iteratively in order to address the emergent nature of risk. So we almost have a bit of a time problem here.

These are our narrative traps that we came up with.

ANJA KASPERSEN: It is very interesting, and I know that you have some calls for concrete actions to be taken to overcome some of these narrative traps and also to address the tension points. One big issue when you talk about the public procurement process is, do we have the right type of AI fluency among the people who have been given responsibilities to look at it?

RUMMAN CHOWDHURY: You are totally spot-on about cultivating talent, and that is actually one of our calls to action: there is a need to cultivate talent and build the right skill sets. There is a lot of focus, definitely in the United States, on getting more public interest technologists into the government sector, and we also have a lot more hiring of data scientists happening, which is really wonderful, and that probably has to go hand-in-hand with some degree of education for lawmakers, for people who are making critical decisions. Not everyone needs to be trained as a data scientist, but everyone who touches this information or these technologies does need to be in some respect knowledgeable and aware of what's happening.

There are a couple of great use cases. One that I want to point to that launched very recently is the New York City AI Primer. Similarly, the New York City Office of the Chief Privacy Officer has written a report on what is AI and what does it mean for the city of New York.

MONA SLOANE: If I may add to that real quick, we are also seeing, which is exciting—and I say this as an educator, as a social scientist who teaches engineers—a massive interest among the new generation of technologists in getting involved in issues around public interest technology, responsible technology, and ethical use, and that is not just in the context of the public sector but also the private sector. So we see big growth of those types of roles in the private sector. Rumman is obviously spearheading this at Twitter and is growing a team there. We see this in other places as well.

What I also want to mention is that one of the outcomes of the project is an Institute of Electrical and Electronics Engineers (IEEE) working group that is developing a new standard for AI procurement, which is also very exciting. So we are seeing a lot of movement in this space, and I would actually expect that late this year or early next year we are going to see some very concrete steps that agencies and organizations are going to take.

ANJA KASPERSEN: Thank you, Mona.

I want to zoom out a little bit. In one of the calls for action you talk about "creating meaningful transparency."

I know that this is an issue that is very close to your heart, Rumman, and I heard you speak in a different podcast about how transparency is a political choice. We talk a lot about the technical feasibilities of ensuring transparency, but I have rarely heard someone being so concrete on calling out decision-makers and saying: "It is not just about technical specifications. It is about making a political choice around transparency." Can you speak more to that?

RUMMAN CHOWDHURY: Absolutely. I am so glad you are bringing that up. The way technology is built today and the way a lot of these systems work is that they centralize power. Even if you think of the construct of most automated decision-making systems, there is some sort of centralized data set, that data set is linked to some algorithm, that model makes decisions, and those decisions are basically bestowed upon us. In a nutshell that is how it works, and as you build and scale your technology, power moves upwards. There is a fundamental tension between, let's say, community engagement, input, or even being transparent and this need to centralize, grow, and scale, and that is a tension that we need to deal with when we think about public use and public interest technologies because it is almost at odds with the need for transparency.

It was actually a really wonderful conversation we had about what transparency means, and some of the most poignant conversation came from Fabian Rogers, who is now working in the state of New York. He made his mark as one of the residents of the Atlantic Plaza Towers, where their landlords were trying to put in facial-recognition technology to watch the residents. The race and income dynamic was not lost on anybody, and he had some really poignant things to say about the value of transparency and what it means. Transparency without accountability is useless. One can be an all-powerful entity and be fully transparent, but if nobody can do anything about what you're doing, there's no point.

The second is that what it means to communicate and be transparent—this was one of my critiques of the General Data Protection Regulation—we talk about transparency, but it's not necessarily clear how one is meant to be transparent and how we can communicate effectively to different populations so that they can make smart and informed decisions.

WENDELL WALLACH: Transparency has had two meanings in the context of AI. One has been about the transparency of the institutions and how they are using the data, what their intentions are, and their being clear about that. I gather that is primarily what you're talking about when you're saying transparency is political, or do you mean to extend that to the other definition of transparency, which has much more to do with whether we can understand what the algorithms are actually doing between input and output?

RUMMAN CHOWDHURY: I would say both. Decision-making does need to be transparent, especially in the public sector, and I have given the example of due process, where we do actually have a right to understand in the United States how decisions are being made.

But you are also correct. There is the "black box" transparency or transparency in automated decision-making systems, and that's the one in which we have to be careful how we communicate the information and what information is useful. Is it valuable information if I am sharing with the public what my activation functions are and what the key values are? No. Most people do not understand what that means. Meaningful transparency is actually more about the latter sometimes than it is about the former, and I think methods of redress are actually often about the former and less about the latter, but thank you for making that distinction.

ANJA KASPERSEN: Going back to the report, there is a very interesting section under what you call "liabilities." I think for every advanced system we see that the use and role of vendors is growing in importance, but we don't necessarily have the systems in place for making sure—as Rumman was just talking about—that the right levels of transparency are in place or having credible and good sustainable AI procurement processes.

RUMMAN CHOWDHURY: Yes. A lot of my interest in the procurement process is actually born out of work I saw at Accenture, where a lot of companies—this is not even public entities; this is just companies in general—that are not native tech companies will often use vendors, third-party companies, and there is a basic assumption that they are shielded from liability, but they're not. If the algorithms that your third-party vendor uses lead to bias in your systems, you are responsible for that.

Vendors present their own special class of complexity because: 1) you do not own, or even have much visibility into, the technology that is being built; and 2) how much you can investigate can sometimes be limited. This is the classic conversation of "How do we protect intellectual property and also perform audits responsibly and correctly?" Those are the things we think of when we think about third-party vendors.

Actually, a third one would be to go back to some of our tension points—incentives. What does the vendor want ultimately for their company and their organization, and what are you trying to accomplish by utilizing their services? Those two things may be at odds with each other.

And, as Mona pointed out with the narrative traps, the answer is not, "Oh, let's all align on the shared agreement." It is just a careful recognition that the vendor may be taking certain actions with the data that is being provided to them or with their direction of growth or their product market fit that may not be aligned with what you may want for your organization, and it is helpful to know those limitations so you make smart decisions for yourself.

ANJA KASPERSEN: Mona, I am very interested in learning more about the underlying assumption in this report that governments will be acquiring and embedding AI systems. One of the big dangers, of course, with doing so is that you adopt a type of optimization mindset that might be suitable in a commercial setting but may not be suitable in a public governance setting. As you said, there are certain types of processes where you simply don't want to see an algorithm making decisions. What's your view on this, and was this something that featured in your discussions and within your groups, the danger of assuming that everything in governance can be optimized?

MONA SLOANE: Thank you so much for that question. That is a very important one.

I think that the—and I am going to be a little bit provocative here, taking your cue—threat of the solidification of a culture of optimization is there, but it depends on what we mean by it. Of course, as we see big challenges coming our way or that are here already—the global public health emergency that we are seeing, the climate emergency, and all of these things that will require our public agencies to react fast—we want to optimize certain processes. We want innovation. We want to move forward.

What we do not want is a kind of uninformed shopping spree for automated decision systems that then become the infrastructure Rumman talked about, which we cannot take down and which then creates a culture that is okay with systemic harm, a culture that is okay with a lack of transparency, a culture that is okay with privatized infrastructures. I think that is more the threat than a culture of optimization in the sense that we don't want to innovate. That is one thing.

The other thing I want to mention—and we discussed this at length and this has also been really a big topic in the whole community of critical tech scholars and also activists—is the question of how we define the problem that we want to solve with technology—or maybe not with technology. What is actually a problem? Who articulates that there is a problem? Who gets a say in problem definition? And who actually gets to say that we need any kind of technology to solve that problem?

So maybe we need to start there, very, very early in the process. I am not the first person who says this. This has been said over and over again by many different people. I think those are the two elements that we need to consider when we discuss a threat of optimization.

ANJA KASPERSEN: In your recent podcast—I believe it was made last year with Carnegie Council—you spoke about how you were worried that increased collection and sharing of health data could possibly be turned into tools of oppression if we are not mindful about all these pitfalls and tension points and even narrative traps that you have been describing in this resource that you created.

Looking back now more than a year later with the pandemic still playing out and looking at how AI is being embedded in more and more systems—sometimes as you said misguided by this search for optimization without having clearly defined what problem you are trying to solve or even making the mistake of treating correlations of data as causation and consequences—what is your view on this now?

MONA SLOANE: That is a big question. Thank you for that. And I would love to hear Rumman's thoughts on that as well. I can already see her thinking and smiling.

I think we ended up in a situation where all sorts of data was collected left, right, and center, whether that was health data or data that was collected due to the fact that those of us who were fortunate enough to be able to shelter in place at home had to work remotely, and every interaction happened through the computer, which always creates a whole lot of data.

I can maybe come at this more concretely as an educator. Having taught for 18 months completely remotely and online, there is a trove of data that was collected through that entirely virtual interaction, and it has changed how universities see engagement with students but also student monitoring and student surveillance. There are very worrying developments in that space. We see reports of AI-enabled proctoring technology having racial bias, for example. We see a normalization of surveillance of students, reaching into their private spaces but also on campus. If I were to be pessimistic, it actually was worse than I thought it would be, because at the time we only spoke about health data, but we maybe need to talk about everything: the normalization of data collection, the change of practices in terms of analyzing that data, and how that has developed over the last 18 months.

I am just going to end on this one note. I saw a tweet two days ago that went viral, where somebody said: "Just a reminder. Data that was collected on 'remote' working over the past 18 months is not data on remote working because it wasn't remote working; it was working during a pandemic." So the question then is: How do we even analyze data, even if we're going to analyze it "for good"?

I am going to end on that and toss it over to Rumman maybe.

RUMMAN CHOWDHURY: I do want to touch on what you talked about in terms of ed tech and surveillance. If I had to name one use case of technology that has gone wrong in what is essentially the public/private sector (let's say we consider education to be a public good), it is ed tech.

What is interesting is that ed tech is what got me into AI years ago. I was a data scientist and an educator, and I felt like the possibilities of ed tech were amazing. It seemed to be possible to solve lots of problems, make education available for everybody, customize it to an individual student's needs, teach people essentially in a Montessori style but using the technology, and elevate teachers. Instead what it has become is a punitive surveillance state, and it makes me genuinely sad to see a lot of the worst uses of technology there, whether it is emotion detection, which is not technology that works, or making broad-based assumptions about what it means for a student to be "engaged," forcing people—whether instructors or students—into abnormal environments where they have to engage for hours and hours on end without any real appreciation of the human condition and how human beings interact. It's a bellwether. It's a leading indicator. These are some of the things we should not do.

I don't want to end on a depressing note, but I do think there is promise and potential because the systems that we build today are not set in stone. If the pandemic has taught us anything, it is that we can reboot and reset. There are so many things that we do today that have become normalized very, very quickly, as Mona has mentioned, but I will put a positive spin on it. We didn't think it was possible for a lot of jobs to be done remotely, for us to be able to do things like education or our jobs, etc., in a way that integrated our home lives. I think we are all working through what that means and what that looks like, what a work/life balance is, but I think there are some positive externalities that may come out of this if we take the information that we have here, apply it, learn, and we are thoughtful about what we are learning and how we are using it.

ANJA KASPERSEN: Following on from Rumman's comments about externalities working in our favor if we are mindful about them, what are the potential pitfalls that come from not applying that mindfulness, or from not even having the knowledge that we need to be mindful?

MONA SLOANE: I am going to respond to that from a sociological point of view, and for me that means always looking at the social practices that make up our everyday life.

When we look at social practice, meanings play a very important role, and what I am worried about—as Rumman has alluded to and as this whole conversation has focused on a little bit—is that the meaning of digital interaction, or just of work, as you just said, now includes giving up fundamental rights, for example, when it comes to privacy. We can talk about civil rights in that context too, because we always need to talk about discrimination. We became okay with that in the global state of emergency we found ourselves in, but it has become solidified and has now formed this meaning infrastructure that we keep running with, or running on almost.

So what I am worried about is that maybe we are in a moment when we really, really depend on lawmakers, advocacy groups, and the public to protect us with a view to the future, and maybe we are also in a moment when the resources are not necessarily available for those organizations to do that lift, financially but also educationally. The knowledge isn't there.

So it's kind of a moment where there is real dependence—and this is where I am coming full-circle to procurement—and this is where I hope that we can see the red lights flashing and get together and put some guardrails in place and throw resources behind education and infrastructure, not just for kids—I tell you, the kids know, maybe more than we know—but for the public because if we want to believably claim that we live in a democratic system which these technologies get embedded into, then we need to make sure people know about them. So I think actually that the corporate sector has the means to throw money behind AI education, and I wish we would see that a little more.

WENDELL WALLACH: This has been a wonderful discussion and introduction of your joint report followed up by this discussion about online digitization and what that has meant, particularly in terms of healthcare and people's consent.

I am wondering before we lose you, though, if I can ask each of you a question that goes to your broader work and how your broader work plays specifically into our concerns at the AI & Equality Initiative.

Let me start with you, Mona, because you have been concerned about issues of equality in the AI sector for a long time. As I understand it, those issues go well beyond bias; you seem to be concerned about how AI is exacerbating structural inequalities and what moves we might be able to make to mitigate those. So I would appreciate your sharing some of your thoughts on that with us, and perhaps going so far as to let us know whether you are a bit pessimistic or whether you are heartened by recent activities.

MONA SLOANE: Thank you very much, Wendell, of course.

My frame for looking at artificial intelligence is a design inequality frame. That is what I have been working on for almost 15 years now, and what I am really interested in is understanding how design (technology design specifically) and inequality intersect, both as social superstructures that work at a macro level and as micro practices that work on the ground, and how these are related to one another.

The reason why I have been so interested in artificial intelligence and so interested in looking at artificial intelligence as a very important case of design inequality is because artificial intelligence systems are scaling technologies. They are all about increasing efficiencies and really exponentially increasing impact and affecting more and more people regardless of whether the social context is hiring, manufacturing, or education, as we just spoke about. That is the backdrop.

I am always the optimist. I think it is extremely important and much needed these days just to get up in the morning and go to work. There are many wonderful things happening in the research space, in the advocacy space, and in the policy space specifically with these issues. In higher education, in the academy we are seeing a lot of very fruitful exchange.

We also see interesting things happening when it comes to the technologies themselves, and I wanted to add that on to the ed tech conversation that we just had. One positive example that I can report on is that Zoom teaching has taught me to speak in full sentences, although maybe today wasn't the best example.

WENDELL WALLACH: You've done an excellent job.

MONA SLOANE: Thank you. That matters because it is an accessibility issue: we have students who need the closed captions, the live transcription, which is a natural language-processing system that I always switch on when I teach remotely. So baking that consideration into the way in which I teach, and using technologies, including AI technology, to increase accessibility has been really exciting.

The other thing I want to mention is that there are a couple of things that I work on—some of them with Rumman—and one of them is thinking concretely about holistic auditing strategies for artificial intelligence systems, specifically in the space of hiring. How can we go beyond just checking whether the system does what it says it does and actually locate it in a tradition of thought, which in the context of automated hiring systems can sometimes be eugenics?

Procurement is another project. I also work with journalism professor Hilke Schellmann and data scientist Julia Stoyanovich on a tool for helping journalists go through Freedom of Information Act material quickly so that we can hold the "powerful" accountable and find smoking guns when it comes to AI systems that are used in the public sector.

I am also very interested in thinking about how we can create a public pedagogy around issues related to equity, technology, and the climate emergency through artistic practice. I founded and now run a program at NYU's Tisch School of the Arts called This is Not a Drill, which will explore that.

WENDELL WALLACH: Wonderful. Those are all intriguing initiatives, and we are lucky to have you working on them.

Rumman, my question for you is similar, going to your long-term involvement in AI, and particularly your role in advising corporations on AI ethics. You have been doing this for many years now. You have, of course, watched the evolution of that field and at least how corporations increasingly give lip service to an interest in AI ethics, but many of us are also concerned about to what extent that interest is genuine, to what extent they are engaged in what is sometimes referred to as "ethics washing," or whether they may be interested in AI ethics but, when push comes to shove, when they have to deal with anything that might even marginally affect their income, they tend to turn away.

So I wonder if you can just share some of your experience, reflections, and conclusions about where we are, at least as far as the importance or lack of attention given to AI ethics in the corporate sector.

RUMMAN CHOWDHURY: Absolutely, and I will put the general partner of my fund hat on for a second. That was actually one of the things that inspired building the fund.

Prior to being at Twitter I had started my own company on algorithmic audits, and one of the most difficult things to do was for me to explain to potential funders and VCs (venture capital) what it was that this company did. People always put things in the framework and paradigms of things they understand. The start-up shorthand would be, "You're the X for Y, like you're the Uber for cookies, or you're the Google for dogs or whatever." But when you do that, you have restricted any idea you have into something that already exists today.

But with my previous start-up and with this fund, what we are really saying is that there is a fundamental shift that needs to happen, and that will happen, in how companies do business. This is coming from the bottom up, through increasing awareness by consumers. I have very much enjoyed seeing the evolved narrative in the media; even among average people on social media, whenever there is a new app launch or a new product, the first thing you will see is people asking: "How is my data being used? How is my privacy being protected?" These were not things people knew to say a few years ago, and now they do, and it is really wonderful.

The second is from the top down. We have talked a bit about regulation, and the importance of the public sector here is not just in procurement and how they use these technologies but also in how these technologies will be regulated, because a lot of folks who do work in this space are not as attuned to how scaled engineering practices work. You do have to meet your audience halfway or maybe even more than halfway. So if we don't build these solutions in a way that is compatible with the way technology is being built today, we are not going to incrementally move forward and we are not going to achieve this desired end state.

WENDELL WALLACH: As a follow up, what about the whitewashing charge? Do you feel that's legitimate, or do you think that really misunderstands what's going on within the corporations?

RUMMAN CHOWDHURY: I think it varies a lot from corporation to corporation. I will tell you a shorthand that I used to have in my time at Accenture. I realized that often CEOs and companies just wanted me to come speak to them so they could have interesting intellectual conversations about trolley problems and paper clip maximizers, but they were not always interested in doing business with us, and I very quickly developed a nose for who was actually interested in what we were offering and who just wanted to have a cool conversation with someone doing AI and AI ethics, which is probably a very trendy talking point. Usually it came down to the questions they were asking me: were they fundamental questions that demonstrated concern, whether for people being harmed or for upcoming regulation, or did they talk at a very high level about artificial general intelligence and what happens if robots attack us? I'm not here for those conversations. Those are fun conversations to have, but they are not necessarily the conversations that will move forward the current state of AI.

So, a mixed bag. There are definitely multiple kinds of people. I do sometimes lament the fact that algorithmic ethics and AI ethics have become such a popular talking point because, going back to incentives, there are so many people with very different incentives in terms of what they want to move forward and what they think this field means. But I do think there is actually quite a positive aspect: there are plenty of people working at companies today who care about this. I work at Twitter, and I will definitely say that there are plenty of engineers who are very deeply interested in this work, and what they really want are teams like mine to give them the right resources so they can act on it in a very tangible way.

ANJA KASPERSEN: Thank you so much to Rumman and Mona. It has been a wonderful and insightful conversation. Thank you for sharing your time, your expertise, and your insights. I hope our listeners enjoyed it as much as Wendell and I have. Also, go to AIProcurement.org and continue reflecting on the tensions and narrative traps outlined by Rumman and Mona as we embed AI in the public sector, but most importantly respond to their call for action to cultivate talent and build AI ethically.

A special thanks also to my co-host, Wendell Wallach, and to the team at Carnegie Council for Ethics in International Affairs for hosting and producing this podcast. My name is Anja Kaspersen, and I hope I earned the privilege of your time. Thank you to everyone.
