The Societal Limits of AI Ethics

This event took place on Monday, April 19, 2021

ARTHUR HOLLAND MICHEL: Hello, everybody. My name is Arthur Holland Michel, and I am a senior fellow here at the Carnegie Council. It is wonderful to see you all here. I am very excited for the discussion we are going to be having today, a discussion that promises to be lively and vivid but also, in certain ways, uncomfortable.

This is the quick overview of what we have in store for you. We have a discussion between myself and three panelists lined up. We will run that for about 55 minutes or so, and then of course we are very, very eager to hear from you. At any point in the discussion, but certainly when we do open it up, we ask that you submit your questions in the chat function.

Just to give you a little bit of context, this is part of a series here at the Carnegie Council where we take a bit of a, shall we say, contrarian position on the topic of artificial intelligence (AI) ethics to think about what are the inherent limits, if you will, of AI ethics, not purely for the fun of taking down AI ethics and being rebels in that regard, but to make sure that the discussion is grounded in reality and that the solutions that we all come to together reflect that reality. In the first session of the series we looked at the very real, tricky, and challenging technical limits for achieving ethical AI, and today we are going to look at societal limits that may get in the way.

Just to give you the lay of the land, in the last few years there has been a very constructive, widespread conversation about the principles that AI should be held to. There is a whole range of these principles, many of which are codified in declarations or shared agreements between institutions and, increasingly, countries. I would draw your attention to three that I think are particularly relevant to our discussion today: fairness, the notion that these systems should be fair; transparency; and accountability. Hopefully over the course of the discussion you will see why those central principles may be easier stated than achieved.

For that I am delighted to say that we have three absolute legends of the field who were gracious enough to join us for this event. I couldn't be more excited to have with me today Meredith Broussard, Safiya Umoja Noble, and Karen Hao. It is a true honor.

Meredith is an associate professor at the Arthur L. Carter Journalism Institute of New York University and research director at the NYU Alliance for Public Interest Technology; Safiya is an associate professor at the University of California, Los Angeles, in the departments of Information Studies and African American Studies; and Karen Hao is the senior AI reporter at the MIT Technology Review.

Hi. How are you? Good to see you all.

I am going to start with Meredith. Meredith is the author of an absolutely wonderful book called Artificial Unintelligence: How Computers Misunderstand the World. If you don't have it on your bookshelf, I highly encourage you to do something about that as soon as this discussion ends.

In one of my favorite chapters, Chapter 6, "People Problems," you have this wonderful line: "Computer systems are proxies for the people who made them." I want you to start us off by telling us about the people who made them and in particular the people who are at the heart of the AI "revolution," if you will. Give us a bit of a sense of the cultural fabric that we are talking about that has been part of this community since the very beginning.

MEREDITH BROUSSARD: Arthur, thank you for that wonderful introduction. It is such an honor to be here and especially to be in conversation with Safiya and Karen.

In terms of people who made our ideas about AI, one of the things I write about in the book is an idea I call "technochauvinism," the idea that technology or technological solutions are superior to other solutions. This is an idea that came from my experience as a computer scientist, where the dominant ethos was the idea that we inside computer science and inside math were somehow smarter than other people or better than other people, and it's simply not true. But there is this kind of snobbery inside the field. There is also a lot of racism and sexism and a feeling that nobody needs to examine their beliefs or examine the structural racism and sexism of the field because math and its descendant, computer science, are so superior.

This is the attitude that I observed as an insider in the field. I left computer science and became a journalist, and then I came back to it as a data journalist. I wrote the book Artificial Unintelligence out of my experience as a data journalist.

One of the things I did for the book was a deep dive into our ideas about technology and society. I did a deep dive into the history of technochauvinism. What I discovered was that our ideas about the future of technology in society come from a very small and homogeneous group of people who are mostly white, Ivy League-educated men who trained as mathematicians. There is nothing wrong with being an Ivy League-educated white male mathematician. Some of my best friends are like that.

But the problem is that people embed their own biases in technology, and when you have a group of people who are largely the same, that group of people has the same blind spots. That is what leads to the dangerous, harmful, and racist situations Safiya covers in her magnificent book Algorithms of Oppression: How Search Engines Reinforce Racism. That is why Google Search is racist: there is a racism, sexism, and ableism problem inside computer science that was inherited from mathematics. It goes back hundreds of years, and the field has never reckoned with it.

Also, the people who are in the field right now don't think that it's important. We know differently now. We know that this is crucially important. We know that data rights is the civil rights issue of our time, and we know that we need to do something about it.

ARTHUR HOLLAND MICHEL: In the book you also talk about the deep strain of libertarianism running through this community. Why do you think that is so closely associated with these communities, and how perhaps does that manifest itself in their worldview and how they create these systems?

MEREDITH BROUSSARD: That's an interesting question. Not many people ask me about that, so I am pretty delighted to talk about it.

Libertarianism is the default in Silicon Valley, and the reason for that is interesting. The early pioneers of the Internet, the Stewart Brands and the Marvin Minskys of the world, were hippies. They were trustafarians and prep school kids who were living through the 1960s and went to live on communes and thought: Oh, hey. Communes are great. We are going to live beyond the reach of the government. We don't need government giving us rules, or whatever, all the hippie stuff.

But the thing is that the communes were actually good places to be a white male, and if you were a female in the communes you were pretty much barefoot, pregnant, and in the kitchen. There were very few people of color in the communes, and the communes all fell apart at the end of the 1960s.

But the people who lived there said: "All right. Well, the communes failed, but there is this whole new world that is emerging called cyberspace. So we're going to transport our ideas about communalism and our anti-government rhetoric onto this new sphere."

Stewart Brand is an important figure in this because he started the Whole Earth Catalog, and then he also started The WELL, which was the very first place that we had a real online community. It still exists. It is well past its heyday. But when you look hard, you can see the way that these 1960s hippie ideas got transported into ideas about cyberspace. John Perry Barlow is a good example: he wrote "A Declaration of the Independence of Cyberspace" at Davos one year, was a former lyricist for the Grateful Dead, co-founded the Electronic Frontier Foundation, and was heavily into this Whole Earth scene.

So libertarianism is what the hippie ethos morphed into. It became the default in Silicon Valley, and this gave rise to the idea that people making technology didn't have to pay attention to rules and didn't have to pay attention to government. Coupled with the economic and racial privilege that I talked about earlier, it gave rise to a toxic brew that has guided the development of the technological systems that are now so influential over our lives, and it is a really bad scene.

ARTHUR HOLLAND MICHEL: I think that was a fantastic scene-setting of what we are dealing with here. Very quickly, would you say that that ethos lives on in the community?

MEREDITH BROUSSARD: Very much so. Stewart Brand is still an influential person. Whole Earth is still an influential community and influential inspiration. All of your major tech titans subscribe to the same kind of magical thinking.

Then you have the weird cult-y stuff like the Singularity or the self-driving car—who is the self-driving car guy who started his own cult? Oh, Anthony Levandowski, who thinks he is a god. So there is all kinds of weird stuff happening, and we should not be giving much credence to it.

We should also pay attention to what these folks are doing. Marvin Minsky, for example, was heavily implicated in the Jeffrey Epstein scandal. You have to look at these folks, look at what they are qualified to talk about and what they are not qualified to talk about, and you have to be judicious. The more whacked-out imaginary stuff about "Let's all go live on Mars" or "Let's go seastead" or "Let's preserve our brains and freeze them so that we can upload them into a future consciousness" is nutty, and we should not pay attention to it.

That's not what AI is. AI is math. It's an actual technology. It exists. It's not the Hollywood stuff. It's not The Terminator. It's just math, and it is being used to oppress people and violate people's civil rights, and we need to change that.

ARTHUR HOLLAND MICHEL: I am going to turn from that phenomenal explanation of the cultural underpinnings that we are dealing with here to Safiya. If you have reserved a space on your shelves for Artificial Unintelligence, I urge you to reserve a space right next to it for Algorithms of Oppression, Safiya's phenomenal book on bias in the technologies that we use every day.

My challenge to you, Safiya, is to explain to us how this cultural makeup, if you will, that Meredith has just described to us manifests itself in harmful ways in the technology that we interact with every day.

SAFIYA UMOJA NOBLE: Sure. Thank you so much for the invitation to be here. 

I love following you, Meredith, because I have not heard you give that history of Silicon Valley before. It's true, people don't ask about that very much. But once I sit and listen to you describe it, and think about the rationalists and the technocratic future that so many of these guys want, it is no wonder that we can turn around and see the consequences in everyday technologies that have become ubiquitous. A lot of my work has been on Google, Google Search, and Google products because they have come to replace and displace so many other kinds of knowledge and information organizations, institutions, and ways of thinking about how to find the truth, whatever the truth is, on a variety of different topics.

What I have found over years of watching Google in particular, though I think we could extend this to some social media companies, which implement things differently, is that broadly there is a total lack of regard for women, for people of color, and for poorer people. There is a set of logics about who is important and who is disposable in these systems. One of the ways you see this is when you do searches for different kinds of identities and different kinds of people: you see, for example, who is for sale to the pornography industry, who is hypersexualized, and who becomes a commodity object in these systems. Over and over again we see that women and girls for the most part are used to bring in an incredible amount of profit for these systems and these companies by misrepresenting us, and of course the same is true for people of color.

So when I say that we get misrepresented, this goes back—I am looking at my book here and thinking about how it was like a big book of tickets for Google. There were so many pages of things that were wrong, and the software engineers picked it up and were like, "Fix, fix, fix." This, of course, is what happens to all of us who write critically about the tech sector: they open up the ticket, try to fix it, and pretend it didn't happen.

My early work looked at what happens when you search for black girls, Asian girls, and Latina girls, and how pornography or hypersexualized materials and websites were the first things that came up, along with their corollary, advertisements. It is interesting because that is such a banal type of AI. That is not what people think of when they think of AI. They don't think of search engines. They think of HAL, or Watson, or some Hollywood fantasy informed by Star Trek or Star Wars, futures imagined by a very narrow band of people in the world.

Weirdly, Karen, Meredith, and I don't really show up in that future very often. That tells us a lot about what is lost in the culture of knowledge and the culture of information when large-scale advertising platforms that are so easily gamed take over the landscape, and more importantly when people and industries with the most money and clout are able to control certain kinds of narratives. We see this in the realm of politics, not just in the realm of identity. The most powerful political action committees are able to control the narrative about political candidates.

I am so grateful to our colleagues who do deep research on social media. I think that is incredibly important. We must take note of what monopolies in the tech space, monopolies in social media companies, and monopolies in search industries do and how important it is to address that.

The difference between social media and something like a basic AI like search is that people are much more aware of the subjectivities of social media. They know that they are following people. They know other people are following them. They know the hashtags they follow and how they use them. It is much more apparent that you are in a particular bubble of your community, even communities you don't want to be a part of anymore that pull you back in, like your friends from high school or junior high school.

But in the case of an AI like search, we are talking about people using it like a fact checker, a truth teller, an infinite knowledge portal. People have interesting imaginations about what search is. That it is a giant multinational advertising platform is not the first thing that comes to mind. This, of course, is why those of us who are, let's say, adversarial subjects in these spaces, people who are part of communities that are misrepresented or harmed by these technologies, have been the scholars and journalists doing the hard work of making these things legible.

Meredith and Karen know. Ten years ago if we tried to have this conversation, people would think we were from Mars. They would completely deny this. I know because ten years ago when I was writing a dissertation, nobody wanted to hear that math could be racist.

Of course we are not just talking about numbers encoded. We are talking about where math meets society, and that is racist. These mathematical formulations are never an abstraction. They are always social. They are socio-technical systems. This is why we have to have these conversations, and I think those of us who come from communities where there has been incredible harm understand that the deployment of these technologies is profoundly unequal and uneven and works in various oppressive ways.

ARTHUR HOLLAND MICHEL: I do want to get to Karen in just a moment, but a very quick follow-up for you, Safiya. Your book received a lot of very well-deserved attention, as I understand it. Some of the individual cases that you discovered and reported on in the book were subsequently "addressed," if you will. But in the months since I read the book—and I am sure you have done the same—every so often I get on Google to see if I can find more examples of problematic AI biases, and it is not difficult. Within 30 seconds you can find an example, and I encourage everybody present today to do exactly that because it is absolutely chilling.

But my question to you is: Google knows about your book. People know that these things happen. Why is it that they were able to solve those individual cases that you pointed to but that it is still so easy to find instances of unfairness, if you will?

SAFIYA UMOJA NOBLE: For the most part what happens in search is that troubling results that capture the media's attention or that go viral in other social media systems just get down-ranked. They get shoved way down so that they are not entirely visible. But the underlying logics of how this search technology works don't change. It is just a lot of triaging. It's a lot like whack-a-mole: as the problems pop up, they are trying to tamp them down.

This has brought Google a lot of criticism because in the early years—and still to some degree this happens—Google would argue that it's the user's fault: "You used the wrong words to look for what you're looking for," which is ludicrous. So, again, it is trying to disavow responsibility for its own product and its own technologies, and that is untenable. You are responsible for the products that you make and how they work in the world.

They received a lot of criticism when, very famously, a horribly racist image of former First Lady Michelle Obama was put into Google Image Search—it was extremely racist, it had been Photoshopped, it was horrific, I don't even want to describe it—and the White House responded and said: "That's unacceptable. We need that image taken down."

A lot of people watched and argued that if Google did that, then it was catering to certain kinds of people in society while other people don't have those kinds of privileges and powers. So Google has backed itself into a corner by saying that anything that shows up in its product is really the result of what the public has done, that it is not its responsibility. But when things arise that are a national security issue or that are offensive to people with power, then Google has to address them.

What we also know is that outside of the United States, Google must comply—as must Facebook, Amazon, eBay, and every major platform—with the laws in the countries where they do business. In Germany and France, for example, it is illegal to traffic in neo-Nazi or Nazi paraphernalia, imagery, or ideas. Anti-Semitism is illegal. So Google is able to go in—as is Facebook and many other companies—and spend an incredible amount of money and resources, mostly human beings, including some AI, to take down that content and make sure that it doesn't move around freely because it is against the law, and they will face millions of dollars in fines.

This tells us that these problems cannot entirely be solved by AI because human beings must adjudicate and make these decisions, and this is where the whole work around content moderation is important. It also tells us that the free-speech logics exported from the United States and from Silicon Valley don't work everywhere in the world, and so the companies must invent different kinds of business practices to manage that.

It also tells us that these platforms are not just the free speech zones they like to pretend they are. Part of the reason they want to say they are total free speech zones and do not interfere algorithmically or with moderators to any degree is so that they are not responsible as publishers, because then a whole other set of laws, under the Federal Communications Commission, would govern them.

These are part of the landscape of the challenges, and this is why they can't fix the underlying logic of the technology: because it's an advertising portal. It is meant to be gamed. It is meant to be optimized for maximum profit. It is not designed to be a fairness machine that is an arbiter of culture, ideas, and knowledge in our society. They would have to toss out most of it and start over to do that, and why do that when we have libraries all over the world that are already doing the important work of curating knowledge?

ARTHUR HOLLAND MICHEL: Thank you so much, Safiya. I think that is a phenomenal point to pivot on, this notion that if we want this stuff to stop, if we want things to be fair, transparent, safe, and accountable, it is going to take a fundamental shift.

That is where I am delighted to say that Karen's important work comes in. I encourage everybody to reserve that third spot on their shelves for a subscription to MIT Technology Review if for nothing else, then for Karen's wonderful reporting on AI and the AI industry.

I should say as a very quick aside that in this discussion we are focusing on some of the Big Tech giants, but I want you to extrapolate that thinking and some of these examples as we talk to something that is pervasive throughout the tech industry.

With that in mind, and given that we are realizing that we are in this moment that this shift has to be so profound, I am wondering, Karen, if you can give us a bit of a landscape of where the discussion is today vis-à-vis getting these companies to actually make good on their promises of ethical AI.

KAREN HAO: Of course. Thank you so much, Arthur. I have so many thoughts on this, so I am going to try to make it as succinct as possible and weave in as many themes as I can articulately. I also want to say that Meredith and Safiya have done such incredible scholarship and also set up this conversation so well today.

Drawing on what Meredith was saying about the history of how the Internet and the tech industry were built: it is on this idea of libertarianism, this assumption that there do not need to be rules, that all speech is equal, and that data is truth. That then trickles into the things that Safiya is talking about, where the tech industry is built on technology that assumes that if you just feed data into a system, it somehow produces something of value, completely detached from all of the issues in society.

The challenge we are facing now is that the tech industry is starting to realize: "Oh, wait. This is not necessarily true, and we need to be more ethical in the way that we approach technology." But they have realized that only at a very superficial, surface level: "Okay, now let's produce technologies that have these nice buzzwords we can attach to them, like 'fairness,' 'transparency,' and 'accountability.'" There is not a deeper understanding of where the problem actually comes from.

So part of the issue is that AI as a field and as a technology is always talked about in the abstract. It is assumed to exist in this ephemeral space that is completely unrelated to the humans, the earth, and the material world around us. The problem is that when you only talk about things in the abstract, you don't go and tackle the root issues, which is why we keep seeing a technology like search AI continually manifest these racist or sexist ideologies: no one is actually tackling the underlying cultural problem.

The culture that we live in is what generates the data that is fed into these machines that then spit out these racist and sexist ideologies. So if you just assume that the data is still fine and that the culture that we live in is still fine and you are just trying to tweak the actual algorithm, you are not going to get very far because the data that we produce is still coming from the same place of this very sexist and racist history.

What I have found in the landscape of AI ethics today is a lot of "ethics washing" because there isn't a grasp of what it actually means for an algorithm to be not just fair but anti-racist, and to be not just transparent but understood by the people who are using it and who are subject to it. "Transparency" is an interesting term. Transparent to whom, and who is making it transparent? It is completely abstracted from the people who are making the decisions, the people who produce the culture that we live in, and the people who are then subjected to these algorithmic systems.

The same thing with accountability. Again, it completely abstracts these notions away from society, and so there isn't a very precise understanding of who actually are we talking about and what actually are we talking about to make these systems better.

ARTHUR HOLLAND MICHEL: As an aside because you did reference it, although in a very modest and indirect way, your glossary of buzzwords for the AI industry—it just came out I think last week, and it is a sadly humorous take on where things are with AI ethics today.

I would ask if you could perhaps give us a little detail on some of your reporting, for example, on Facebook and the story about how Facebook has become addicted to AI. You talked in there, for example, about its interest in de-biasing, so making AI fairer, being driven by a business case more than anything else. Could you tell us any more about what you found in that story and how it relates to what we are seeing and talking about today in terms of the societal underpinnings that we are up against?

KAREN HAO: This Facebook story was a nine-month investigation that I did into Facebook's Responsible AI Team, and it began when Facebook reached out to me, saying that they were interested in having me do some kind of deeper dive into their AI work. As I was meeting with different leadership within the AI org, I realized that Facebook had a Responsible AI Team, which was news to me, and it had had some form of one for three years. This was in the summer of last year, and I was intrigued by what on earth Facebook's Responsible AI Team had been doing for those three years, because a lot of the conversations we were having then are still conversations we are having now: about the way foreign actors can weaponize these algorithmic targeting systems to interfere with elections, and the fact that advertisers can use them to discriminate against different users.

What I found is that Facebook's Responsible AI Team epitomizes some of the flaws in the way the tech industry has been approaching this idea of responsible AI. The initial people who started it all had very technical backgrounds, all machine learning and computer science people, who were just waking up to the idea that math is not neutral and AI is not neutral. When they started looking into what they should do about that, they found these ideas—fairness, accountability, and transparency—and made them the pillars of their Responsible AI Team. I think their three pillars are fairness, privacy, and transparency.

Responsible AI is not defined by just some vague words. Responsible AI means that you need to understand what harms your algorithmic systems are perpetuating and figure out how to actually mitigate those harms. The problem was that Facebook plucked these three words out of the vacuous space that is AI ethics without actually thinking about how it interfaces with their technologies, and then they decided to pursue these three things. But if you think about what are the actual algorithmic harms of Facebook, things like amplifying misinformation and polarizing users, those are not actually addressed by these three buckets, fairness, privacy, and transparency.

Facebook was very excited to present to me their fairness work because that was the work they felt was most developed, the work they wanted to parade around the world to demonstrate that Facebook had finally taken responsible AI seriously. But as I was saying before, these words are vacuous because they don't carry an understanding of power or context. Fair to whom? "Fairness," like any of these words, has a squishiness where you can interpret it in many different ways based on, essentially, what is useful to the business.

In Facebook's case what happened was that, even before they built the tools to measure fairness and try to make fairer algorithms, there was already this idea of fairness that existed at the company within the Policy Team. Fair meant to them equal treatment of U.S. conservative and U.S. liberal users. One of the reasons why they adopted that specific definition is because it's great for the business. If you can continue to pretend that Facebook is neutral, that it is treating everyone equally, then you can ward off regulation, especially when there is a conservative government in power.

But one of the bad side effects of this vacuous interpretation of fairness is that it directly undermined some of the ways other employees were trying to tackle deeper-seated responsible AI problems, like fixing misinformation. As I write in my story, I spoke to an AI researcher whose team was actively trying to develop AI to catch misinformation, but because in our political context over the past four years a lot of conspiracy theories have been specifically attached to conservative political leanings, these algorithms meant to catch misinformation affected conservative users more than liberal users, and therefore under their definition of fairness should not be deployed. So they were actively creating these systems and then having them taken off the platform and prevented from being deployed because of "fairness."

I think Facebook is a bit more of an egregious example because of its power, influence, and size as a company, but these themes echo throughout the tech industry where the people that are driving their responsible AI initiatives do not necessarily understand what they are actually doing. They just pluck these terms from the general AI ethics discourse, and then in the end because of business incentives and other reasons those terms actually make everything worse rather than better.

ARTHUR HOLLAND MICHEL: I think with that we have put together a solid overview of what we are dealing with here.

What I want to do now is bring you all in to tie this all together, if you will. My sense is that in the discussion of AI ethics there often seems to be this assumption that everyone is onboard with the principles of ethical AI. To be sure, if you look at all the marketing copy, everyone does seem to be onboard with these principles. No one is going to raise their hand and say, "Actually, no, I don't want AI to be safe, fair, or transparent."

But the stories you all shared today tell a very different narrative. It makes me wonder whether there are actually fundamental and intractable ideological differences among the parties to this issue and that that might actually be the fundamental barrier that we are up against. To put it in plain terms, not everybody is onboard with this.

With that in mind, I would like to circle back to Meredith, but I want you all to come in on this, so maybe Meredith, Safiya, and then Karen. Am I right in saying this, or is that too pessimistic a take?

MEREDITH BROUSSARD: I think that there is a certain amount of truth here. What computers can do is math. That's all they can do. They are not human beings.

It was so much fun listening to Karen and Safiya because they are just right. That's all there is to it. They are correct.

But I think there are a couple of flawed assumptions at work in the AI narrative. The first is the idea that we are going to be able to write once and run anywhere and have AI or have algorithms that are going to govern speech in every topic in every country and in every community in the world. It is enormous hubris. It is not possible. One of the interesting things is the way that Big Tech digs into this idea that it is possible to do a thing that is not actually possible, but they keep pretending that it is.

Social media algorithms, for example, can figure out what is popular, but they can't figure out what is good because what's good is a social decision. It comes out of a social context. Popularity has been used as a proxy for good in a lot of places, but the thing is that there are a lot of things in the world that are popular but not good, like racism, conspiracy theories, or Robin burgers.

Good is something that is socially determined. It is contextual. Computers can't do that. They have never been able to do that. They are never going to be able to do that, so the belief that we can write an AI that is going to adjudicate everything is deeply flawed.

I think it goes back to this technochauvinist idea that somehow we are going to be able to build machines that are going to replace humans, that we are going to be able to build machines that are better than humans. Again, it's hubris. We should stop pretending that it is possible.

The case of self-driving cars is a good case of this because people have been pretending for many, many years that it is going to be possible to make a self-driving car, that the self-driving car is not going to go out there and kill people, and that we are going to have this glorious future. We were promised it in 2020. We were promised that AI was going to drive cars and none of us would be driving anymore.

I am still driving my car, and two people just died yesterday or the day before in a Tesla crash.

ARTHUR HOLLAND MICHEL: Safiya, I want you to jump in here.

SAFIYA UMOJA NOBLE: There is a reason why certain words get picked up and deployed and other words get left on the table. I want to go straight to the heart of why "fairness," "accountability," and "transparency." When we have used other words, like "oppression," "civil and human rights abuses," and "discriminatory," those words are left on the table by these same kinds of academic-industry partnerships.

I have been around a long time, so I remember when the fairness, accountability, and transparency movement started as a project under the Association for Computing Machinery, and a number of my former professors and colleagues became a part of that movement, which is a movement that was picked up by industry.

Some of the very people who were part of founding this AI ethics effort were in direct opposition to feminist scholars and critical race scholars, people who were using words like "oppression" and people who were talking about the racial discrimination and sexism baked into these projects. They were not interested in the social dimensions of the problems. They were interested in making more perfect technology, more perfect algorithms, and that was the impulse for that movement in its early stages of organizing; that is who was in the room. They were looking at things like: "How do we distribute the chance of harm randomly across the entire population instead of having it just hit, let's say, African Americans?"

But the fundamental logic that Meredith is talking about, this temerity to think that you could somehow perfect the algorithm or perfect the technology, is really a way to obfuscate or avoid dealing with the kinds of things we are talking about: broad and deep global inequality, the racial wealth gap, economic inequality that grows greater every single year we record it, and disparity in the distribution of resources in our society. People who many times are not interested in talking about those things, which are considered social issues for others to care about, and who are instead interested in how to make a more perfect AI, now have a whole new industry they can play in to, as Karen so powerfully put it, "ethics wash" or not deal with those problems.

I have to say that ten years ago, when I was working on the origins of the book and was using words like "ethics," it did not mean what it means today because there was no AI ethics industry. There was no cottage industry. So it was interesting to me. The context within which I was using it was to talk about structural oppression. But to lift up a word like "ethics," which really has no legal basis, is to talk about personal moral culpability, whether you are a good person or not in relation to how you use these technologies. It is a little bit like corporate social responsibility. It moves into that realm. It leaves the companies in a place where they can stay hyper-focused on the technology, and they don't have to deal with broader corporate practices.

For example, how is it that we have so much global inequality, and seven of the ten most well-capitalized companies on planet Earth are tech companies? What does it mean that tech companies don't pay taxes and offshore their profits? What does it mean that they profoundly use the public infrastructure and pay very little into it? What happens when Big Tech moves to your city and everything is gentrified, sometimes just by the announcement and the speculation that they are coming to your town?

These are profound issues, and I think when we focus on fairness, accountability, transparency, and ethical AI, we don't have to look at broader corporate practices. This is like the greenwashing movement of the fossil fuel industry or other kinds of industries. We have seen this a million times. We are going to have to get serious and broaden the conversation, and to me this is why it is important to have these conversations and to remember what we're really talking about when we're using these words.

ARTHUR HOLLAND MICHEL: Thanks, Safiya.

KAREN HAO: I want to jump in here because I love what Safiya was mentioning about these words and how the words themselves are problematic in the way that we talk about these things, and it reminds me of this passage from the book Data Feminism that I would love to very quickly read because it is so relevant. This was an excerpt that was pointed out to me by Ria Kalluri, who is a Stanford AI researcher and also co-creator of the Radical AI Network, which attempts to challenge a lot of these more vacuous approaches to AI ethics.

In Data Feminism, which was written by Catherine D'Ignazio and Lauren Klein, it says:

"Thus far the major trend has been to emphasize the issue of bias and the values of fairness, accountability, and transparency in mitigating its effects. This is a promising development, especially for technical fields that have not historically foregrounded ethical issues and as funding mechanisms for research on data and ethics proliferate."

"However, as Ruha Benjamin's concept of 'imagined objectivity' helps to show, addressing bias in a data set is a tiny technological Band-Aid for a much larger problem. Even the values mentioned here, which seek to address instances of bias in data-driven systems, are themselves non-neutral as they locate the source of the bias in individual people and specific design decisions. So how might we develop a practice that results in data-driven systems that challenge power at its source?"

"The following chart introduces an alternative set of orienting concepts for the field."

This chart has two columns. The first one says "concepts that secure power" because they locate the source of the problem in individuals or technical systems. The second column is "concepts that challenge power" because they acknowledge structural power differentials and work toward dismantling them.

So in Column 1 it says "ethics." In Column 2 it says "justice." In Column 1 it says "bias," in Column 2 "oppression," Column 1 "fairness," Column 2 "equity," Column 1 "accountability," Column 2 "co-liberation," Column 1 "transparency," and Column 2 "reflexivity."

I love this passage so much because it cuts through a lot of the challenges with the field. AI ethics as a field today, even the language that we are using, is still building on this notion of libertarianism that founded the field initially. So if we shift the language in the tradition of abolitionists and in the tradition of feminists, we can begin to grasp at what we are talking about.

ARTHUR HOLLAND MICHEL: Several of you mentioned this notion of the "tech Band-Aid," the de-biasing Band-Aid. But as we covered in our session on the technical limits of AI ethics, even that "tiny little issue" of de-biasing AI presents enormous technical challenges that are dizzying the second that you get into them. Even to divert to technological solutions is potentially problematic too.

At this point, I am seeing that there are some fantastic questions coming in, obviously because you gave us such a wonderful picture of what we are dealing with here. What I am going to do is read a number of these questions in one go and open it up to you to pick and choose as you wish.

There was one question earlier about how the community that builds these systems, the tech industry, is getting more diverse in some respects in recent years and whether this is changing the intellectual outlook of the field at all.

There is also a question about the notion of whether there should be tough laws to enforce this stuff. If the industry isn't going to do this on their own, do we need to turn to regulation?

There is a question about where the three principles come from and whether there is an AI ethics industry. Tricky one there.

I am going to finish with this last one because I see another batch coming in: whether the people who use these tools are responsible. This esteemed audience member says that they can't get their friends to give up Facebook or Instagram and that they themselves still use Twitter and Google Docs. Are they responsible?

I am going to throw that all out to you. Please, whoever wants to have a go at any one of those, you have the floor.

MEREDITH BROUSSARD: Let me jump in and say I am so glad we are talking about language and what are the implications for AI ethics and AI fairness because as a computer scientist and as a writer, I think a lot about navigating issues of language and issues of precision. When you are writing a computer program you are writing using a vocabulary and a grammar that the computer understands, and the underlying structure of a computer program is not unlike the underlying structure of a piece of writing.

But in programming, precision is incredibly important. If you have a semicolon missing, then the whole thing blows up or you get a totally different result. So you have to be very, very exacting about your words. This is one of the things that is satisfying about it, honestly, because you do all this precision, and then you get this huge rush when it actually works because you have been banging your head against the wall for so long.

This kind of precision that you need in order to do computer science is in many ways different than the kind of precision you need for writing, because in writing we can say something like "fairness," and the audience will come and bring their own interpretation to it. The concept of fairness will trigger thinking on behalf of the reader, and the reader will bring this whole wealth of experience to it and will have this wonderful mind-expanding experience.

It is not the same in computing. When I think about the way that you would need to program for fairness I think, Oh, yeah, that's really hard. It is a hard computational problem. In fact, it is not actually possible.

What I think about here is an extended metaphor around fairness. I think about a cookie. When I was little and there was one cookie left in the house, my little brother and I were going to have a fight about that cookie. If a computer were sent in to adjudicate this fight, the computer would say: "Okay, well, mathematically what's fair is we just divide the cookie in half, and each child gets 50 percent, and then everybody is going to be fine, and that's how we should solve it." And that's true. That is the mathematically fair way to solve the problem of a scarce resource.

But in the real world, when you break a cookie in half it doesn't actually break 50-50, it breaks into a big half and a little half. So if I wanted the big half, I would say to my brother: "Okay, you take the little half now, and I will let you pick the TV show that we watch after dinner." My brother would think about it for a second and say, "Okay, that sounds fair." So it was a socially fair decision.

Mathematical fairness and social fairness are not the same thing. You can program a computer for mathematical fairness, but you can't program a computer for social fairness. When, as Karen said, we change our frame to justice it gives us a totally different way of thinking about this, and thinking about the cookie gives us a good way of thinking about what is impossible and what is possible with computing.

SAFIYA UMOJA NOBLE: I will jump in to add on these issues by taking up a little bit of a different question. I think certainly regulation is crucial because the industry writ large, left to its own devices, will do whatever is most profitable because that is its mandate. We don't want to forget that publicly traded companies are beholden to their shareholders, and that means they are also beholden to maximize profit for their shareholders. That creates quite a conundrum when these technologies are motivated purely by the profit imperative yet are used to displace other kinds of public, democratic institutions.

We see this all the time, for example, when teachers and parents, instead of having students go to the library—again, a plug for the libraries—or ask their teacher or their professor for access to the university, foster a new culture of just Googling it. That means that large-scale advertising companies step in as a proxy for other kinds of knowledge and for public goods writ large.

We can see this, of course, in the time of COVID-19, a pandemic that laid bare how fragile so many of our public goods are, and in the fact that the most searched-for terms in Google are always around health. What does that tell us? It tells us that Google Search is the public health proxy, a stand-in for not having a public health system that everyone can participate in.

There are so many ways that these companies are working as a proxy for the public goods that they have been directly responsible for defunding, again by not paying taxes. We have to have some type of regulatory environment that looks at that, at the tension and the balance between certain sectors of the economy and what is good for the social welfare of the publics in this country. Of course, other countries are doing this. The United States is way behind.

We also have to ask ourselves: Why is it that the United States—and this is borrowing from the work of people like Professor Danielle Citron, a law professor at the University of Virginia, who helps us understand this—is a safe haven for some of the worst kinds of tech companies in the world? If you wanted, for example, to open up a company for the express purpose of stalking people, bullying people, and engaging in nonconsensual pornography postings, what we colloquially understand as "revenge porn," the United States is where you would open up that company. Just as the Cayman Islands is a tax haven, the United States is a haven for the worst kinds of tech companies.

All of that is part of it. We are not just talking about the branding technologies. We are talking about how the sector works with a lawless type of culture and set of imperatives that I think we can see undermine democracy, increase threat and harm, and maximize the potential for hate crimes and other kinds of acts of violence in our society. We are going to have to reckon with that and contend with it. There is no getting around it.

Of course, we have not even begun to scratch the surface in this conversation. We could spend a whole other hour talking about predictive analytics and predictive technologies that are all about classifying and sorting everyone in our society into opportunity and out of opportunity. We are seeing evidence of this in banking, finance, mortgages, education, access to admissions into college, and distribution of healthcare. I could go on and on.

We are, I think, at an incredibly important moment in history, where we are going to grab hold of this and contend with it. To me it is akin to the Civil Rights Movement of the 1960s. We are in that kind of a struggle right now, where we are fundamentally dealing with rights and opportunities, enfranchisement and disenfranchisement, power and domination. We have a responsibility to make these concerns legible rather than letting them pass as natural and normal.

MEREDITH BROUSSARD: I want to totally agree with what Safiya said and also point out that regulation is what we need right now, because for many years we have been under the delusion, one the tech companies have given us, that self-regulation was the way to go. Self-regulation has failed. We gave it a chance. It has profoundly failed, and now it is time for government regulation to ensure that people's civil rights are preserved inside tech platforms.

KAREN HAO: Just to jump in here, to summarize what we have been talking about for over an hour now: as you try to unpack what is wrong with AI today, inevitably the conversation goes to what is wrong with society today. To return to the question that you initially asked us, is it feasible for us to actually develop ethical AI today? Probably not, because we have not yet arrived at a society that has the values that we would want to codify into AI to supposedly bring us this future.

For me personally, I am optimistic. The AI community, the people who are building AI, is a very nebulous thing. The walls of what defines this community are expanding a lot. There are more and more people coming in from non-technical backgrounds and from ethnically and socioeconomically diverse backgrounds. By expanding the people who are involved (my dream is that ultimately everyone in the world would be involved in building this technology), we can have a much deeper discussion, push and change the industry, and transform our society and this technology toward something that is more ethical, more just, and all the things that we have talked about.

ARTHUR HOLLAND MICHEL: It feels in a certain way that part of what we are getting at, to bring things full circle, is that just as the technology itself is a proxy for the people who made it, the solutions are perhaps necessarily a proxy for the people who get to, or want to, implement them. Not everyone agrees on the best way to do that, and if there isn't that alignment, we have a challenge right there.

I want to throw a few more questions at you. They are coming in like lava in the chat. We have an interesting one: Which do you think is more of the problem—this is an A/B question: A, there is still a huge gap among ethicists, philosophers, and people in the tech industry? Or, B, is the bigger problem that in many ways the discipline of ethics and philosophy itself is pretty uncollected, biased, and colonial?

As a quick follow up to that, what is a dream course or set of readings that you would want to get your students to engage with in order to become future "AI justice designers," and how does that interact with the point about the philosophy and ethics field?

SAFIYA UMOJA NOBLE: Last year, in one of my classes, I taught my students in Information Studies Charles Mills' The Racial Contract, and I loved it. Mills is a MacArthur genius and an incredible philosopher. One of the things that was eye-opening for my students is that they did not understand, let's say, the origin story of Western Enlightenment thinking and how profoundly it is implicated in being used to do things like uphold trans-Atlantic slavery, the institution of the enslavement of African peoples, and to justify Westward expansion and the occupation of Indigenous peoples' lands throughout the Americas.

Part of what is challenging is that I have actually seen computer science faculty try to teach ethics in computer science in response to these critiques. It has been fun to watch on Twitter, where they are teaching Enlightenment theories, and many of us are like: "That's not it. That's not what we're talking about."

There is a lot of excellent critique of the way in which philosophy is an incredibly troubled field, at least in the way that it is taught in the United States. Going back to that origin story is actually part of the problem. Even if you go back further into Western civilization, to Aristotle and Plato, there are many critics outside the United States who would argue that our hyper-reliance on Aristotle and binary logics, for example, and the racist and sexist origins of classification logics, are part of the underlying problem that we have with AI in the United States now.

So there are different points of view about this, different ways of coming to know and think about AI. When I talk to my Chinese colleagues, for example, who are computer scientists and work in machine learning outside of the United States, they will tell me that part of the problem is our own orientation philosophically toward the way we organize AI logics.

I think these are philosophical questions, and there are many different philosophies one can draw from around the world, and people in fact do. So I think we want to remember that this is also where the flattening of these concepts, the "it's just math" framing, makes it difficult to apprehend that many philosophies are racist, sexist, and binary in their logics, some of them even arguing for the dehumanization of anyone who is not white or of European descent. You are going to have to contend with that if you want to talk about the origin story of the philosophy that underlies some of our principles.

ARTHUR HOLLAND MICHEL: Here is a little more "mid-air refueling," if you will, to carry us to the end of the hour. We have a question about how you can regulate these companies if they are becoming as powerful as, and indeed starting to behave like, governments. There is also a question about whether addressing race in facial recognition is the right direction, as opposed to asking whether we should be using facial recognition at all.

On the tail end of that there is a question around whether one of the fundamental issues here is privacy regulation and if more should be pushed into that, whether it is achievable and foundational, thinking about issues that have not actually come up so far in our conversation, around, for example, consumer rights and control over the data that is collected about us.

I am going to leave it there because I want to give you all a chance to have a stab at anything else that has come up in the Chat, but I also want us to hold off for the last three minutes of discussion because I have one last exercise for you, if you will. But at this point I will open it up. Does anyone want to address any of what has just come in from our dear audience?

MEREDITH BROUSSARD: I can take the facial recognition one. What was the one before facial recognition?

ARTHUR HOLLAND MICHEL: About these companies being as powerful as governments.

MEREDITH BROUSSARD: Yes, about regulation.

It is a problem, and we also need to look at the way that the Big Nine tech companies are spending enormous amounts of money on lobbying. In terms of who should write the regulation, I am right here. I am waiting to do it. I am ready. I am volunteering. I have been volunteering for a while, so hopefully I will just keep saying this, and somebody will take me up on it sooner or later. Independent people writing regulation—right here, and it needs to be done.

In terms of facial recognition, I'm assuming everyone has seen Coded Bias, has seen how terrific Safiya is in Coded Bias.

SAFIYA UMOJA NOBLE: And Meredith.

MEREDITH BROUSSARD: Thank you. Lots of fun.

Joy Buolamwini's work is at the epicenter of Coded Bias, and Joy Buolamwini is the researcher who brought to the world's attention that facial recognition systems discriminate: they are better at recognizing lighter skin than darker skin, better at recognizing men than women, and they totally ignore the existence of trans, non-binary, and other gender-nonconforming folks. Deeply problematic, racist, discriminatory.

Lots of people say: "Okay, the way to solve this problem is to put more people of color into the training data sets that are used to train the facial recognition algorithms." Joy's work is important because she says: "No. That will make the facial recognition algorithms better at recognizing people of color, but really we should not be using these systems at all."

So she pushes the argument forward into an argument for justice. The reason we should not be using these systems is that facial recognition systems are disproportionately weaponized against communities of color, against poor communities in policing, and the way that we make a world that is more just is not by making these systems better; it is by not using these systems at all.

We need to push the conversation beyond "Oh, how can we improve the technology?" and question the uses of technology in the first place and examine what kind of a world do we want to create, and do we want to be using these technological systems to continue to discriminate and oppress, and I would argue, no, we don't, and we shouldn't.

SAFIYA UMOJA NOBLE: I think we will definitely see more of a consumer rights and civil rights orientation in the regulation framework. Most regulation right now is focused on data privacy in particular, even the General Data Protection Regulation in the European Union, but I think, as more evidence comes to light around things like algorithmic discrimination as well as other types of concerns, that may afford the public the right to sue for damages. I saw that question in the chat. I think we will definitely see more; New York has been a perfect example, and even Texas now. New York State Attorney General Letitia James, don't mess with her. Any tech companies in New York that are discriminating algorithmically are on her watch list, so I think we will see more at the state level.

This is important because the states are places where consumers, people in this country, can sue for damages, and we certainly need that. This is one of the things that Cathy O'Neil says in her great book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. At the time she was writing the book, she argued that you can't take an algorithm to court. This is a challenge. So with the framing of the kinds of harms that we have been talking about, and so many more that we have not even addressed yet, I definitely think we will start to see more.

We are seeing members of the Congressional Black Caucus, such as Representative Yvette Clarke, and we are seeing Senator Cory Booker, many of whom are looking at algorithmic accountability in terms of bringing new legislation into place. And one of our favorites, Federal Trade Commissioner Rohit Chopra, has been amazing, even as a minority voice and opinion on the Federal Trade Commission, around things like the unfairness doctrine and the penalties that companies will have to face.

I think there are mechanisms and spaces already government-wise to do that work. Part of it will be the will of the people who work in those agencies and as our representatives to enact and enforce, but also part of it is the role of journalists like Karen, who can translate what is at stake here to broader publics so that the public can apprehend and understand these conversations and understand how their own consumer and civil rights are being violated.

ARTHUR HOLLAND MICHEL: I know that the topics we have discussed today do not necessarily leave us feeling optimistic or cheery about the future. As you have all so brilliantly pointed out, there are tremendous challenges ahead. But in our last two minutes, I want to ask each of you to share with us something that does give you hope and optimism in this regard.

Meredith, I saw you had a very cute little puppy in your video, so it can be that too. But please do end us on a note of what does give you optimism.

Karen, I am going to hand it over to you first in that regard. Go ahead.

KAREN HAO: I think what makes me optimistic is the fact that we are even having this conversation. As both Meredith and Safiya have mentioned, this is not a conversation we could have had a few years ago, so the fact that we are talking about it openly, that we are identifying these challenges is the first step to actually solving them, and I am grateful for that.

SAFIYA UMOJA NOBLE: I will give Meredith the last word. I am excited to see so many more students—in particular I see this at UCLA—who want to be involved in these conversations, more engineering students who come and take classes from those of us who teach more of the sociological dimensions of computing and technology.

There is definitely a lot of energy around these conversations in the generation that is in undergraduate programs right now, which is also very different. Ten years ago my students were in complete denial that any of these things were real or possible, and they considered them one-offs. Now my students are like, "We see it, and not on our watch." That is for sure exciting. I think it is going to be incumbent upon the rest of the world to catch up, so to speak, and support that energy.

The truth is that we are living in a time of so much economic, social, and political precarity that something has to give, and we should not be rushing toward a technocratic future where we score who is worthy of having food, who is worthy of being able to cross a border and travel, and who is worthy of having shelter, and who is not. We are going to have to deal with the fundamental issues that the technology is over-determining, which are issues of deepening inequality. Even as we attend to these technical issues, they are not the point. The point is what is happening in our societies around the world and how technology exacerbates or mitigates that. Those are the kinds of questions that I think are on our minds and increasingly on the minds of others.

MEREDITH BROUSSARD: I am going to totally agree with Safiya and with Karen. I have a pandemic puppy on my lap here, and that is something that gives me hope and inspiration. It is also very grounding. I think that staying grounded when talking about issues of AI is very, very important. It is extremely important to remember what is real about AI and what is imaginary about AI, because when we get too deep into the imaginary that is when things start to go off the rails, when we start investing too much in imaginary and unreasonable futures.

I think the thing that I will say is most inspiring is, as my co-panelists said, the fact that we are even having this conversation at all, because ten years ago this conversation didn't exist. Probably even five years ago this conversation didn't exist. Talking about it is the only way that we are going to get to any kind of collective understanding of all the dimensions of the problem and of how we can start trying to attack it not just computationally but also from a social aspect and from a regulatory angle as well.

Thank you for a great conversation today.

ARTHUR HOLLAND MICHEL: Thank you.

This is the moment in the evening in ordinary times that we would all retreat to the cocktail hour, the true event. Alas, we cannot do that, but we will have to look forward to reconvening in the not-too-distant future.

In the meantime, I encourage everybody assembled here today to please follow the very important work of our esteemed panelists and of course to also join me in giving them a very vigorous virtual round of applause. I also will just say that the transcript of this event will be available on the Carnegie Council website in the next few days, along with a video, as well as obviously a phenomenal calendar of upcoming events that I urge you all to join.

With that being said, I am going to sign off and thank you all once again so much for being here. This has been an absolute blast, and I look forward to seeing you again soon.

Take care.
