Responsible AI & the COVID-19 Pandemic, with Rumman Chowdhury

March 26, 2020

CREDIT: Piqsels (CC)

ALEX WOODSON: Welcome to Global Ethics Weekly. I'm Alex Woodson from Carnegie Council in New York City.

This week’s podcast is with Dr. Rumman Chowdhury, global lead for responsible AI at Accenture.

Although Rumman and I scheduled this talk before the coronavirus had really hit the United States, we ended up discussing several aspects of the pandemic and how it relates to artificial intelligence. We touched on surveillance, supply chains, pseudoscience, and how we're all coping with social distancing. Rumman's focus is on the intersection of humanity and AI, so it's great to have her perspective at this time.

For a lot more on AI, including recent talks with Berkeley's Stuart Russell and IBM's Francesca Rossi, you can go to carnegiecouncil.org.

For now, here's my talk with Rumman Chowdhury.

Rumman, thank you so much for speaking today. I'm glad that we were able to do this.

RUMMAN CHOWDHURY: I am too.

ALEX WOODSON: Before we talk about the coronavirus pandemic and how artificial intelligence (AI) is affecting that or how it's affecting AI and a few other things, I thought we would start with talking a little about your role at Accenture. Your title is global lead for responsible AI. What do you do exactly in this role?  Maybe you could discuss what "responsible AI" means.

RUMMAN CHOWDHURY: Of course. My role is actually quite a unique one within organizations. I was hired three years ago to create a client-facing practice on responsible AI, so I sit on our global AI leadership team. My job is to create ethical solutions for explainable, transparent, and fair AI for our clients. Part of my job is thought leadership and thinking through what are the next big challenges and how do we address them.

Fundamentally what I love about my job is that it's grounded in reality. Everything I say and everything I build has to be something that is of use to an organization or a company that is genuinely trying to build responsible AI solutions.

ALEX WOODSON: I have watched a few clips of you speaking and read a few things that you have written. One thing that came up a couple of times that I think explains your role—and you have said this as well—is the phrase, "Brakes help a car go faster."  I was hoping you could explain what that means and how that relates to AI.

RUMMAN CHOWDHURY: I wish I could take credit for that. That's actually something I got from Zia Khan of the Rockefeller Foundation. I thought it was one of the most insightful things I had heard. The phrase comes up often when we talk about regulation, when people are afraid that regulation will stifle innovation. Built properly, standards, guidelines, and regulations help you define a safe space to build. They help define parameters.

If we use the analogy of a car: if your car could only accelerate and just keep moving forward, you would actually drive at an incredibly slow speed because you would be deathly afraid of something coming up in the road, something unexpected, and your car being unable to stop in time. Brakes are actually why we get on the highway and maybe drive a little bit faster than we should: we are safe and secure, knowing that if we have to stop, we can stop, and if we have to slow down to meet the appropriate standards of speed, we can do that. It's a different and I think very insightful way of thinking about how we utilize and work symbiotically with standards, guidelines, and regulations, rather than seeing them as blockers or something that is stifling or stopping progress.

ALEX WOODSON: Moving into some discussion of the COVID-19 pandemic, we were talking before the podcast about how everyone is trying to adjust to this new reality of working from home, staying in, and social distancing. I think we're probably relatively early in the days of social distancing and dealing with this pandemic in the United States. So far what have you noticed as far as links between AI and the pandemic, and how are things turning out as you see them?

RUMMAN CHOWDHURY: This is an interesting case where everybody wants to help, and that's understandable. It helps us feel like we have some degree of control over what's going on. It has been positive in the sense that we have all these different communities of people, with broadly different approaches, coming together to try to tackle this problem. The White House recently had a call to action, which translated into a massive data set that was put on Kaggle and 10 different Kaggle competitions.

I have a small team, and along with other folks at Accenture we created multiple teams to address a few of those. My team is looking at the ethical and social considerations that exist in the literature.

I think it's a pretty smart move. They compiled all of the literature that exists on COVID-19, dumped it into one repository, and said: "Here are 10 questions. Help us parse through this data and information and pull out the best answers to these questions."  I think that's a wonderful call to action.

One thing we're seeing from a client perspective is that folks are starting to think about the supply chain. It's quite interesting. We were talking before the podcast about how I think COVID-19 is a race against time that is testing the limits of our supply chain. Maybe one thing that will come out of it—hopefully, if everything is fine or as fine as it can be—is a critical reevaluation of where we might be able to improve transparency and reduce inefficiencies, but also a recognition that some of the things we thought were AI very much relied on people.

So, thinking of all these apps that we use for delivery: we think of them as technology applications, but they fundamentally rely on human beings to supply you with food or groceries or to deliver your packages. It's actually a reminder at a meta-level that when we talk about artificial intelligence there is often a discussion of the AI versus the human, placing the AI in a position of power, when actually what we're seeing—in our current state of narrow AI, at least—is that it completely falls apart without human beings.

ALEX WOODSON: Is that something that surprised you, or is that just something that you and others have learned about and are taking into consideration now?

RUMMAN CHOWDHURY: I don't know if it's surprising. There are a lot of great books written on this: Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass by Mary Gray and Siddharth Suri is a great one, for example. There have been a lot of folks who focus on the human component of AI.

I don't know if it's necessarily surprising so much as it is a much-needed reframing of the discussion on AI. It's often viewed as a dichotomy—human versus AI, or human or AI—and I think a lot of us have been thinking about how we create truly human-centric AI. Accenture chief technology & innovation officer Paul Daugherty, along with Jim Wilson, who is also at Accenture, wrote a book called Human + Machine: Reimagining Work in the Age of AI, and I think they have a really good perspective on how we integrate this technology into the fabric of humanity.

Bringing it back to COVID-19 and the supply chain, we can build all the fancy AI recommendation systems, etc., in the world, but without the human, physical infrastructure it really does fall apart. We have not and maybe cannot automate that away.

ALEX WOODSON: Is there anything that you see in the future of AI, maybe not the far-distant future but the near future of the technology, that you wish you had right now that worked better and that could be helping us during the pandemic that we don't have yet?

RUMMAN CHOWDHURY: Besides the obvious stuff like a vaccine and a cure?  That's a really good one. I think there have been a lot of folks tackling issues of misinformation and disinformation. It is certainly not a solved space.

If I were to put anything in a wish list, that would be the one thing I would wish for. I think we're seeing the rise of armchair epidemiologists, and some of these takes are genuinely harmful to people because people are getting wrong information and taking actions in good faith that might not be helpful. I suppose if there is one thing I would wish for it would be a good way to tackle and target the pseudoscientific information that's floating around at the moment.

ALEX WOODSON: I saw in a talk from a month or two ago that you're working on misinformation around the 2020 election, some kind of algorithm or process to help identify it. Is that correct?

RUMMAN CHOWDHURY: What I'm working on is a project actually to identify pseudoscience on social media and on the Internet. All of this predates COVID-19.

Last year I had a conversation with Angela Saini. She is the author of the books Inferior: How Science Got Women Wrong—and the New Research That's Rewriting the Story and Superior: The Return of Race Science. She was working on a project to tackle how pseudoscience spreads on the Internet, and I thought it was very interesting.

I think there are a lot of great folks who work on election misinformation and political misinformation. Personally, the concept of tackling pseudoscience was compelling and interesting to me. It's almost a different problem space from, let's say, election misinformation, but it is no less important if you think about—again, this predated the current pandemic—the issues going on right now, where people are not getting the right information, or even other situations like the anti-vax movement and how harmful that is for children and adults who are not getting vaccinated because they have incorrect and pseudoscientific information.

Pseudoscience is also an interesting topic because it's only in part false. This is where I find the distinction between misinformation and disinformation to be very interesting. Different folks define these very differently, but I define misinformation as misleading information—that is, like a distraction—versus disinformation, which is an actual lie. The reason this matters is that if something is quite literally, factually incorrect, it can be refuted. Someone says, "Grass is blue," and you go and get some grass, and you're like: "Nope, it's green. Here is some grass." It can be refuted.

If you're talking about distractions, that can often be a conversation about values. This is where a lot of pseudoscience lies, and a lot of this relates to AI ethics as well. A lot of the nitty-gritty details lie in the uncertainty.

Yes, science, especially medical science, is not 100 percent bulletproof. There are people who have adverse reactions to vaccines, for example. You would have to have an understanding of how systemic analysis works to understand that just because a few people had a bad reaction does not mean that all vaccines cause autism, for example. That's the part that I find interesting because there is an educational component to it. There is a part to this which is about teaching people. What we're working on specifically is comparing and contrasting different approaches to addressing pseudoscientific misinformation and disinformation.

Broadly speaking, there are two models: a public health model and an epidemiological model. What we want to do is compare and contrast different approaches to see what has been successful and what has not been successful in what situations, so that people can actually start being more proactive instead of reactive in thinking through how they're going to approach pseudoscience.

The public health model thinks more about infrastructure and institutions. For example, maybe there's an individual whose job is to authenticate the veracity of a scientific claim, or you might create a whole content-moderation-style network of people. That is an infrastructural answer, and that is a public health kind of answer.

An epidemiological answer is more about responding to a crisis, where you want to slow the spread and you have to be reactive. One example would be WhatsApp limiting the number of people who can be in a group chat. That is an epidemiological response. So we're talking through the different approaches we have seen, what has worked and what hasn't in what situations, so that people moving forward can think intelligently about the approaches that have been taken and what they can use in their situation.

ALEX WOODSON: That sounds like a very important project, especially right now.

Another thing that I wanted to speak about relating to the pandemic is surveillance. I spent a lot of time today reading about surveillance and the coronavirus, and not just in China with its thermal scanners: there's a phone app in Korea that can track quarantined people. A lot of different issues come up. Is there anything specifically that has caught your eye as far as AI surveillance and the pandemic?

RUMMAN CHOWDHURY: There are two parts to this. I don't want to sound cavalier as to the severity of the situation, of course, because it is genuinely frightening, but one of the things that I hope lodges in people's minds out of this is, at a very basic level, how difficult it is to collect and analyze data. A lot of discussion right now, for example, is around prevalence. We actually don't have good numbers of how many people are being infected or are infected because we don't have a good way of testing, so we're using these rules of thumb like temperature. There are a million reasons why someone's temperature could be raised. It doesn't necessarily have to be COVID-19.

One thing I hope at a meta level is that people start to grasp what we say when we say: "A measurement of crime isn't actually the measurement of true crime. It's the measurement of the amount of crime that was picked up—the number of people who got arrested, the number of calls that were put in—not actually a measurement of true crime."  It's an analog at a very meta level. Again, I don't want to sound cavalier about the literal crisis that we're in.

Second, thinking about surveillance, I think this has brought a lot of the privacy narrative to a head. Sometimes I forget that I live in a bit of a responsible AI bubble. I feel like "everybody" is talking about privacy, and everyone's talking about ethics. In reality the average citizen of the world is not thinking about whether or not their geolocation data is being used. But now I hope they are.

This is where it comes to a head: we need to think deeply about how we create agile response systems for moments of crisis that don't lead to a decline in our personal rights and liberties. There are a lot of different articles out there—the Electronic Frontier Foundation has chimed in, there was an Anthony Appenz [phonetic] article yesterday—and a lot of folks are saying: "We get that things need to be tracked, but, one, this is a bit of a slippery slope, and how do we come back from it?" Balancing that, of course, is the very obvious immediate need of tracing and tracking where this disease is going.

Add to that convoluted mix the fact that just because you're measuring everyone's temperature or tracking their geolocation data on their phone doesn't mean you're getting accurate information. If I know my phone is being tracked, maybe I'll just leave it at home when I go visit my friends and family. People are smart. They learn to game the system.

The most compelling thing to think about right now is how we can balance privacy and security. This is not a new conversation. This has been the conversation since September 11 in modern times and even before.

ALEX WOODSON: I wanted to ask about that as well. Do you find yourself thinking about how the government reacted to 9/11 and to the financial crisis when you consider how we can do better in reacting to this pandemic?

RUMMAN CHOWDHURY: It's hard to say. A lot of folks are drawing the parallel with the fiscal crisis of 2007–2009. I think the difference here is that this current pandemic is attacking the entire global supply chain versus a very powerful and impactful industry vertical, which was banking. Obviously a lot of folks were impacted by the banking crisis, but at the same time you were still able to go to your job, pick up groceries, and take your kids to school. Now the entire global supply chain is at a halt.

I think the thing that is different here, even compared to 9/11, is that it's like the whole world is holding its breath, and we're not quite sure how long we have to hold it. That's the scary part for a lot of folks, whereas with September 11 and even the banking crisis there was more of a sense of: "Okay, this thing happened. What do we do to rebuild?" I think most people are very unclear as to what is going to happen next, so they don't have the certainty of saying: "Okay, this really terrible thing happened, but it happened, and it's over." We don't even know when this is over quite yet.

ALEX WOODSON: As you said, most people don't think about their data being private. They don't think about their geolocation being tracked and all that. But people are very scared right now. They're scared that they're going to get sick, their loved ones are going to get sick, and they might say: "Sure, let's increase surveillance. Let's let the government take our temperature every time we walk into a grocery store, or this or that."  How would you respond when people say that it's necessary right now to let the government into our lives and let them control us a bit more than maybe they did a month ago?

RUMMAN CHOWDHURY: That's a great question, and maybe this links back to the pseudoscience question that Angela and I, along with this group, are trying to tackle.

Ultimately this idea of increased surveillance—temperature tracking, geolocation tracking—is an epidemiological response. It is not a long-term infrastructural response. One benefit of thinking about it as an epidemiological response is to say, "Okay, this is a temporary measure, and here's how long we're going to do this," but then ultimately we need to build a robust infrastructural response that takes people's privacy and civil rights concerns into consideration. I think there's a short-term response and a long-term response. Maybe in the short term, sure, you're more okay with this data being picked up or sold or however it's being used, but are you okay with it long-term, if you genuinely think this through?

I think the issue is that that conversation with the public is still very nascent. People had just started to understand how their data is being used. I think The Markup had a good piece a few weeks ago about insurance-pricing algorithms, thinking through the data they pick up and how they may negatively impact you. These are just starting to be conversations.

While people may today say, "We need it; it's a public response," it is not something that has an easy, clear answer. I think one thing we do need to have a clear answer to is: What is our long-term response, so that we're not consistently being reactive?

One thing I am reading is that this may just be the first of a series of things like this. This may happen every year. We can't do this every year. We can't have our global supply chain fall apart. We can't have people go without money and children going without schooling. We can't do this all the time.

ALEX WOODSON: To talk about something from a different perspective, I was watching a talk you gave a few months ago. You talked about algorithmic determinism and how in a sense we're being nudged to be like everyone else. Have you thought about that in terms of the amount of time that everyone is spending watching Netflix or just spending a lot more time with media than they were a month ago?  Do you worry about this algorithmic determinism even more so now?

RUMMAN CHOWDHURY: I just genuinely worry about the homogenization of society.

To explain the concept, algorithmic determinism specifically concerns recommendation systems and how companies essentially build profiles around who they think you are. It's based on limited data that's collected to make a profile, and what we call "personalization" is not actually personal; it is customization to a user group that has similar characteristics to you. So I have been typed by whatever media company as being of a particular geographic locale and estimated income level, gender, and age. They will definitely have my shopping preferences. Built on all of that, they will take pools of people who are quite similar to me and then say, "These people like these things, so she'll probably like them too."

So algorithmic determinism talks about how it becomes increasingly hard to illustrate that we are nuanced human beings. I followed that up with a piece of research with a data science researcher on my team, Bogdana Rakova. We wrote a paper on something we called "barriers to exit," where we actually quantify what I would call a "bandwidth" around your recommendation, which is essentially a buffer, where you have to prove to the algorithm that you are a different human being.
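To make the "barrier to exit" idea concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than the actual method from that paper: a user is reduced to a small preference vector, each interaction nudges the stored profile only slightly, and the "barrier" is measured as the number of consistent, out-of-character interactions needed before the profile escapes a fixed buffer around its starting point.

```python
import numpy as np

def update_profile(profile, interaction, alpha=0.05):
    # Each interaction nudges the stored profile only a little;
    # the small step size is what creates the barrier to exit.
    return (1 - alpha) * profile + alpha * interaction

def interactions_to_escape(profile, new_taste, alpha=0.05, buffer=0.3, max_steps=1000):
    # Count consistent interactions with a new taste until the profile
    # leaves the buffer around its starting point.
    start = profile.copy()
    for step in range(1, max_steps + 1):
        profile = update_profile(profile, new_taste, alpha)
        if np.linalg.norm(profile - start) > buffer:
            return step
    return max_steps

# Hypothetical three-dimensional taste space, e.g. (comedy, news, documentary).
old_taste = np.array([0.9, 0.1, 0.1])
new_taste = np.array([0.1, 0.1, 0.9])
print(interactions_to_escape(old_taste, new_taste))  # about 7 consistent signals
```

Even with a total change of taste, the slow update rate means it takes several repeated signals before the system registers a different person, which is the kind of buffer the research tries to quantify.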

Then there is human agency—our right to free will, essentially, and our ability to take action and make decisions in our lives. I think it's in part philosophical, but there is a very salient case to be made—you brought up a lot of the terminology on "nudging"—that people are nudged to be homogenized based on a very flat profile of who companies think they are, built on very limited data. That doesn't appreciate the nuances of the human condition, or our ability to exercise free will and make decisions in our lives that may be radically different from who we were six months ago.

ALEX WOODSON: Is there a way that AI can be modified to make people more open to different TV shows, different movies, different music, different things to enrich their experience? Is that something that you're working on, or is this more just looking at the AI landscape and saying that these are things we need to look out for?

RUMMAN CHOWDHURY: That's a great question because I would hate to be the person who points out problems but doesn't give answers.

One way it is addressed in, let's say, recommendation systems is to introduce jitter, which is literally random noise. But random noise doesn't always make sense. I think we have all watched Netflix and companies like that go through this as they evolved their recommendation algorithms. If you're interested in data science, the Netflix data science blog is really good, and you learn a lot about how they're thinking through their problems.

With jitter, all of a sudden you're given something completely out of left field, and you're like: I would never buy this. Why are you recommending this to me? I think there is an interesting case to be made—not that I have a formal solution for this—for at least being able to identify, as I mentioned, the barrier to exit: What is the threshold the model has before something is viewed as a fundamental shift, and how do we appreciate that in the recommendations that are given to an individual?

For example, one thing we might want to think of is a short-term, dynamic shift in your recommendations if you are consistently—however you want to define "consistently"—showing a preference that is slightly different from what the system might expect of you. That means factoring in that people don't abruptly change their minds; they change their minds over time. How can we model or address the process of gradual shifts in preference? I think preference changes are often viewed as being very binary, when they're actually a longer-term process. It's certainly not a solved problem, but what I'm glad to see is that people are thinking about it, and not just my team.
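As a rough sketch of what that could look like in code (again in Python, and with every name, window size, and threshold here an illustrative assumption rather than anything a real service is known to do), a recommender could ignore a single out-of-character interaction as noise but follow a sustained run of them as a genuine, gradual shift:

```python
import numpy as np
from collections import deque

class DriftAwareRecommender:
    # Toy model: treat one out-of-character interaction as noise,
    # but follow a consistent run of them as a gradual preference shift.
    def __init__(self, profile, window=5, shift_threshold=0.6, drift_rate=0.2):
        self.profile = np.asarray(profile, dtype=float)
        self.recent = deque(maxlen=window)       # last few interactions
        self.shift_threshold = shift_threshold   # how far "out of character"
        self.drift_rate = drift_rate             # how fast to follow a real shift

    def observe(self, interaction):
        self.recent.append(np.asarray(interaction, dtype=float))
        recent_mean = np.mean(self.recent, axis=0)
        # Only a full window of consistently divergent behavior counts as a shift.
        if (len(self.recent) == self.recent.maxlen
                and np.linalg.norm(recent_mean - self.profile) > self.shift_threshold):
            self.profile = ((1 - self.drift_rate) * self.profile
                            + self.drift_rate * recent_mean)

rec = DriftAwareRecommender(profile=[0.9, 0.1, 0.1])
for _ in range(10):
    rec.observe([0.1, 0.1, 0.9])  # a consistent new preference
print(rec.profile)                # drifts gradually toward the new taste
```

A single stray click barely moves the recent average, so the stored profile stays put; a consistent run of them pushes the average past the threshold, and the profile begins to drift, gradually, toward the new taste.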

ALEX WOODSON: I think that could be a very important development as we all spend a lot of time with media over the next weeks to months. I think we are all going to be looking for new things to do with our time.

RUMMAN CHOWDHURY: This is a very good example. We have had this exogenous shock of a crisis, and of course an algorithm does not know that this is a crisis. It's not just that we're watching more media. What we watch is a function of how we feel. Almost everyone's reaction has been stress of some sort, whether you are scared or angry or whatever it is, and everybody responds differently. Some people are immersing themselves in the news and constantly want to watch things to educate themselves. Other people are watching all the dystopian movies about pandemics. And some people, like myself, have started to watch children's cartoons because I just want to watch a happier world at the moment.

That's such a great use case to model when you think about recommendations: What are people thinking about right now, and what is at the top of their minds that may not be reflected in the kind of data you traditionally pick up?

ALEX WOODSON: My wife and I watched Step Brothers over the weekend, which was very good. She actually said that when she gets a desire to watch the news, she puts on an episode of Portlandia.

RUMMAN CHOWDHURY: I have been watching Disney+ nonstop.

ALEX WOODSON: I still probably follow the news too closely, but maybe that'll change.

Last question: What are you going to be working on in the next couple of weeks, the next month? Is your professional life taken up by the pandemic, or are things progressing as they normally would? I think I saw a tweet that you work from home normally.

RUMMAN CHOWDHURY: Yes, I do. It has been interesting to see people adjusting to working from home. As a consultant, one thing I will say is that I genuinely appreciate how much thought Accenture has put into giving us an infrastructure where we just hop on whatever platform we use—we use Microsoft Teams at work, and it is seamless to me—and how different it is for a lot of folks. How frustrating my job would be if it took three hours to upload a PowerPoint because I had to dial in through an antiquated virtual private network with servers that are going to be overloaded. That being said, I think I tweeted that about two weeks ago, and we were talking about how there has been this week-by-week change.

My daily life with work has shifted, of course. A lot of us are thinking about our COVID-19 response, in particular how we can provide good solutions for folks but do it in a way that preserves our rights, our privacy, and our data, and doesn't create a long-term surveillance infrastructure. I think it's very tempting—it's also human nature, frankly—to want to think that there is this benevolent body that will just take care of us, especially in moments of crisis. These are things we want to think about factually. We are our own heroes. We create our own solutions. I really want to be thoughtful about that as we start thinking long-term about how we resolve these problems.

There is some research that my team and I have been doing along with the Partnership on AI and Spotify Labs, thinking through the organizational shift toward tech ethics. I think in times like this, work that focuses on long-term infrastructural change becomes really important, and we get a sense of urgency around it.

It's quite an interesting study where we thought about the adoption of ethics in AI as a shift in your organizational culture, in your organizational structure. This is more than just implementing a technology and a few guidelines for a technology or some explainability software. This is actually about the structure of your company: How do you incentivize people to do their jobs?

Interestingly, there are parallels to the 2008 financial crisis. I have been nerding out about risk management lately and have learned a lot about the fiscal crisis. Obviously, the narrative in the public, which was true, was about junk mortgage bonds and subprime lending, but there was also an infrastructural component of organizational ethics and how people did or didn't do the right thing because of how they were compensated or whether or not their company supported the culture.

So we did 25 interviews with individuals who have jobs kind of like mine, where they have to deliver ethical AI solutions within their company or for their clients—so not folks in research, essentially. We talked to them about their pain points in their organizations. We highlighted four main topics. One is: When and how do we act? Increasingly right now, people see themselves being very reactive and would love to be more proactive and anticipatory.

The second is, how do we measure success? I'm very fortunate to be in a role like mine. Very few people are in a job where they are doing this as their full-time remit. For a lot of folks, it's a side-of-desk project, it's a passion project, and sometimes it comes at the expense of their performance indicators, which may be around revenue or the number of projects pushed out. Instead we are thinking through how we can develop key performance indicators to support ethical use.

The third is about internal infrastructure, so thinking about leadership and how leadership can position ethical use of AI as something important to the company and how much that matters, again thinking of times like this, when people are looking to leadership to give strong and clear guidance.

Fourth is how we resolve tension. Inevitably, when we talk about ethics, we run into situations where there are disagreements. How do we figure out the way forward, especially in a business setting, in a way that is not just bottom line over everything else? That is the stereotype of what business does.

I will tell you, having worked with a lot of legal folks, it's actually not what business aspires to do. I think a lot of folks in different corporations want to do the right thing and would be willing to say no to unethical projects. They just need to be supported. So we're thinking about, What are the enablers to drive change in responsible AI use?

Given our conversation today, it's such an important thing to think about as we try to create structural, proactive, and anticipatory solutions for whatever the next impactful situation or crisis may be.

ALEX WOODSON: Thank you so much for talking. I'm glad we were able to do this.

RUMMAN CHOWDHURY: Thank you for having me.
