CREDIT: Pixabay (https://pixabay.com/illustrations/artificial-intelligence-brain-think-3382510/)

AI & Human Rights: The Practical & Philosophical Dimensions, with Mathias Risse

Aug 7, 2019

Mathias Risse, director of Harvard Kennedy School's Carr Center for Human Rights Policy, discusses the many connections between artificial intelligence and human rights. From practical applications in the criminal justice system to unanswered philosophical questions about the nature of consciousness, how should we talk about the ethics of this ever-changing technology?

ALEX WOODSON: Welcome to Global Ethics Weekly. I'm Alex Woodson from Carnegie Council in New York City.

This week I'm speaking with Mathias Risse. He is Lucius N. Littauer Professor of Philosophy and Public Administration and director of the Carr Center for Human Rights Policy at Harvard’s John F. Kennedy School of Government.

Mathias and I spoke about his fascinating article "The Future Impact of Artificial Intelligence on Humans and Human Rights." This appears in the summer issue of Ethics & International Affairs, Carnegie Council’s quarterly academic journal. It was co-written by Steven Livingston, director of the Institute for Data, Democracy, and Politics at The George Washington University.

Mathias and I had a wide-ranging talk on artificial intelligence (AI). We started with some helpful definitions and touched on the ongoing political and human rights discussions. We spoke about real world, practical applications of this ever-changing technology and ended with some deep philosophical questions about the nature of the human brain and consciousness.

For more on this article and any other journal-related content, you can go to ethicsandinternationalaffairs.org. The summer issue features a roundtable on AI and the future of global affairs. For more on this subject, you can check out a podcast from last winter with bioethicist Wendell Wallach on the governance and ethics of AI.

Special thanks to Adam Read-Brown, managing editor of Ethics & International Affairs, for his help in setting this talk up and formulating some of the questions.

For now, calling in from Cambridge, Massachusetts, here’s my talk with Mathias Risse.

ALEX WOODSON: Just to get us all on the same page, I thought we should start with a general definition of artificial intelligence. How do you define artificial intelligence for the purposes of this article and our discussion?

MATHIAS RISSE: Let me say there are three different ideas that I think we should have on our radar at the same time. There is artificial intelligence. Then there's also the second idea, machine learning, and then the third idea, big data. The problems that we currently find vexing and the challenges that we find also in the human rights community very much come at the intersection of these three topics, so let me just briefly speak to each.

Intelligence as such, as I understand it generically, is an ability to make predictions about the future and solve complex tasks. This is a day-to-day understanding of intelligence. Then, artificial intelligence is that kind of ability demonstrated by machines. So, there's natural intelligence and there's artificial intelligence, meaning that we construct the devices that show intelligence.

In that domain, artificial intelligence, we have a separation between artificial general intelligence and artificial specific intelligence. The general version, artificial general intelligence, means machines that are capable across a very large range of issues and topics and basically mimic human intelligence. Something like that we do not have at this stage, but it is a future possibility.

What we do encounter quite a bit in our day-to-day life is artificial specific intelligence: machines that are very good at one particular thing, a local version of intelligence. Basically, smartphones, tablets, laptops, and so on, self-driving vehicles, robots, are all intelligent in this local way. So we encounter a lot of this artificial specific intelligence, but we are not encountering artificial general intelligence at this stage. That's the domain of artificial intelligence.

The second idea is machine learning. Basically, "machine learning" is the word used for the mathematics behind all of this. One core subpart of that, in turn, is a sophisticated set of algorithms that is getting ever better at teasing out patterns and regularities from data. Many of the tasks that we describe as specific artificial intelligence are ultimately reducible to tasks of that sort, of detecting and teasing out patterns and regularities. In recent years, the math behind that has become ever better.
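To make "teasing out a regularity from data" concrete, here is a minimal, purely illustrative sketch; the data, the variable names, and the simple least-squares fit are invented for illustration and are not drawn from the article or the interview:

```python
# Minimal illustrative sketch (invented data): "teasing out a regularity"
# from data, here with an ordinary least-squares line fit via NumPy.
import numpy as np

# Hypothetical data: hours of daily phone use vs. number of ad clicks.
hours = np.array([0.5, 1.0, 2.0, 3.5, 4.0, 5.5])
clicks = np.array([1, 2, 5, 8, 9, 13])

# Machine learning, at its simplest: find parameters that capture the
# pattern relating inputs to outputs.
slope, intercept = np.polyfit(hours, clicks, deg=1)

# The learned regularity can then be used to predict unseen cases.
predicted = slope * 6.0 + intercept
print(f"Predicted clicks for 6 hours of use: {predicted:.1f}")
```

Real machine-learning systems use far more elaborate models, but the underlying move is the same: extract a pattern from past data and use it to predict new cases.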

The third idea is big data. Big data simply means there is this humongous amount of data being generated these days through any kind of activity involving electronic devices. Basically, every time you interact with a device that's networked, that's wired and connected in some way, you are generating data, and those data are landing somewhere, they are ending up somewhere. There is just this humongous amount of data that each one of us generates every day, and from the point of view of a company or a state actor these data can form a massive, massive mountain of data.

It's really at the nexus of these three themes—artificial intelligence, machine learning, and big data—that our questions and challenges arise right now because without this massive amount of data the machine learning mathematics wouldn't have any regularities to tease out, and without these regularities then any thinking about either specific or general artificial intelligence would be a nonstarter.

ALEX WOODSON: You mentioned artificial general intelligence, saying that we're not there yet and that it's kind of theoretical at this point. But how far along are we in this technology with these three ideas that you've discussed? How dramatic are the changes that you've seen over the last few years?

MATHIAS RISSE: The simple answer to that is nobody knows. In fact, the current situation in this debate is that it's often the people coming from the computer science corner, the engineering types, the programming types, who tell us that everybody else who is concerned about the societal impact of these new technologies is getting too worried, too excited, too alarmist when it comes to talking to the public about these matters, because they say that they do not know how to produce artificial general intelligence, and there is no time horizon on which they see themselves getting there, though it may eventually arrive.

Not being near that in a predictable chronological manner is one thing. Another thing that is true also is to realize what we have been seeing over the last decade is that the speed of innovation, the rate of technological progress, the velocity with which new technologies enter our life seem to be ever-accelerating. Think about what life was like without smartphones, and now so many people are walking around with them, and they're bringing revolutionary changes all over the world.

The point is that we need to think about many questions now and have them on our radar now when they are still within the domain of possibilities, even though it is also quite a possibility that something like a general artificial intelligence worthy of that name will never exist; even though I myself, being a philosopher, simply take note that an overwhelming share of experts in that field seem to think that within the next two decades we are very likely to actually get to that point.

ALEX WOODSON: To get to the human rights part of our discussion I think we need a couple more definitions. You, along with Steven Livingston, do a great job of explaining in your article the definitions of "explicit" and "full" ethical agents and the differences between them. I think we should go over that next.

MATHIAS RISSE: Yes. Let me give some context for that.

Think, first, for a moment, about what we are doing when we're talking about human rights. In 1948 we got the Universal Declaration of Human Rights, and that is a document that says that every single human being has an entitlement to certain protections and certain provisions, every single human being. It's in a way an enormously emancipatory document. It says that any number of versions of abuse that have historically been normal, any number of versions of disadvantaging people, classifying people into stratified hierarchies where some really just didn't matter terribly much, none of that is okay. All human beings have a certain status.

Of course, that was an amazing step of progress. Something like that at such a grand scale—as part of the United Nations—had never been done before. But there are obviously some interesting questions that arose in this context. For example, where does this leave animals? How do we think about the protection of animals, who were not the subject of the Universal Declaration?

But now as we're looking forward and really looking deeper into the future, as technology gets more sophisticated, as we might come closer to something like an artificial general intelligence—or also as we might have interesting mixed existences of cyborgs, who might be machines with organic materials implanted in them, or human beings, or creatures that in a way started out as human beings and are technologically enhanced in many ways—as we're thinking about all those kinds of possibilities, or simply the possibility that we will have intensely networked systems, machines that we simply can't turn off easily anymore when we get bothered by them or don't need them anymore, as we think about all that, we're getting questions about what kind of moral consideration these kinds of devices—let's just call them "entities" to have a neutral term—would themselves merit at some point.

Some people are very progressive about that, and they say, "Well, look, at some point, they will approximate human beings ever more and more," because ultimately you could say everything that humans do can be reduced to some kind of intelligent task of the sort I described before, and eventually we'll be able to put sufficiently many of them together in a clever way so that machines can actually mimic humans.

This line of thought says if at some point they are just different from us in the sense that we are made of carbon and they are made of silicon or some other material, then we would be a kind of carbon chauvinist by not treating them with the same respect and responsiveness as we treat other humans. That's one radical view there.

Another view is more dismissive about that and would say, "Well, there will always be relevant differences. There will always be a difference of being the kind of thing that humans are, the possession of a consciousness or mind." Some people, of course, think there is a soul, so there will always be a marker of humanity that machines will not partake in, so to speak.

This vocabulary here, the distinction of being explicit and full agents, that's part of a set of terms that was actually introduced by James Moor, who is I think a retired professor now at Dartmouth, who was one of the early pioneers at this intersection between computer science and ethics.

An explicit ethical agent is basically an entity that does everything we normally see humans do, including decision-making and consideration of other things around them, so basically it is phenotypically indistinguishable from a human agent. That's an explicit ethical agent.

Then, a full ethical agent is a type of explicit ethical agent, an enhanced one, namely an explicit ethical agent who also has the kinds of metaphysical features that we usually attribute to ourselves, namely consciousness, intentionality, or free will.

That's a differentiation among different types of agents, and then you can use that distinction to ask questions: What kind of moral considerations do agents deserve if they are just explicit ethical agents but not full ethical agents, or is it possible even to be an explicit ethical agent without being a full ethical agent? So, there are some interesting inquiries that you can then conduct with this distinction.

ALEX WOODSON: Moving this to specifically the human rights community, AI is definitely something that you're thinking about in terms of human rights, but what's the status of this in the larger human rights community? Is this a discussion that people are engaging in at the moment?

MATHIAS RISSE: Yes. It has hit the human rights community massively in recent years, and it's easy to see why. If you think about again the Universal Declaration of Human Rights from 1948, this is a list of rights. There are 30 articles in there that cover topics from discrimination to judicial procedure, basic components of democracy like free speech, and rights to political participation.

Then you have to ask, basically for every era, what this is actually about—what are the concerns that are addressed by these rights—and every era gives different answers to that because the social context and political context in which we live are different and are shaped by technology.

To give an illustration, discrimination is a big topic right at the beginning of the Universal Declaration. Discrimination these days happens quite a bit in terms of how data are collected and how data are analyzed. The big concern is that the data reflect a highly discriminatory past, in particular a highly racist past, and that analyses drawing on these kinds of data will shape the future in a similarly discriminatory and particularly racist manner.

Then you have the question: What does protection against discrimination mean? What does discrimination look like these days? This leads to reflection on technology, and similarly for every right. The right to free speech these days is something that needs to be understood in the context of Internet platforms, and there are questions of content moderation, for example, coming in.

So, the context in which a right needs to be understood, and the kinds of dangers to its exercise that arise, vary. That is very much something that needs to be understood in a technological context, and increasingly the human rights movement has been seeing that in recent years.

Then, of course, the human rights movement is a civil society movement that often reacts to governmental overreach and governmental abuse, and these new technologies, of course, create enormous possibilities for governmental overreach. Think about the massive-scale social scoring system that the government of China is in the process of developing, or the cyber interventions that seem to be coming out of Russia and probably also North Korea. There are governments manipulating things, creating basically artificial stupidity rather than artificial intelligence.

Human rights organizations and human rights centers at universities are very much reacting to that, and they try to think of ways not just of revealing all this to the public but also of devising defense mechanisms. All of that has arrived in the human rights community on a big scale.

That doesn't mean that everybody in the human rights community is interested in this. Not everybody is convinced that the agenda really is set by technology, and they do other kinds of work, so not everybody is involved in the tech work, and that by itself is not a problem at all. But I think the overall tendency is that the world of human rights is increasingly coming to grips with both the dangers and the possibilities that come from technology.

ALEX WOODSON: Moving beyond the human rights community, it sounds like there's a good discussion there that's going on. Do you see that in the policy world as well? Do you see governments taking this issue on? You mentioned these huge things that China is undertaking. What's your sense of how policymakers maybe in the United States, Europe, and throughout the world are reacting to these types of issues right now?

MATHIAS RISSE: It's a very mixed set of impressions that one gets. One background phenomenon here that one really needs to acknowledge is that companies these days, especially the high-tech companies that are operating from the United States, have developed a level of sophistication with the technology that often takes governments a long time to catch up with. I think this is so in the United States and also in many European countries.

That is a problem: companies—especially these very powerful and large tech companies that already are more influential, more important, and wealthier than the energy companies of old had ever been—are basically a couple of generations ahead of what governments are capable of doing, of what governments are capable of understanding, even. I think that is especially a problem in the United States right now, given that our current government is so singularly uninterested in regulatory approaches and in mapping out future trajectories. It's happening in somewhat better ways in some European countries, but the discrepancy remains. There is simply a lot of sophistication and energy in companies that governments have a hard time catching up with.

Whether that's a good thing or a bad thing is also an interesting question here, because if you do have a government like China's or Russia's that does take an active interest in this, then you can, of course, also see the massive dangers that come out of that. Once governments really understand this and invest their manpower in developing technologies for their purposes of ruling their people, the possibilities are endless for governments, and from a citizens' standpoint, from a human rights standpoint, that is scary.

These are interesting times in that way: the relationship, this triangulation among governments, companies, and technology, is very complex and very different across countries.

ALEX WOODSON: I'm interested in some more specific examples that you're looking at, more things that are happening right now that you're monitoring. What's something that if someone is interested in the connection between AI and human rights—we talked about China's Social Credit System—what are some other issues that someone should look into?

MATHIAS RISSE: Let me talk first about my own research interest at this moment, and then I'll draw attention to what a couple of other people around me here do.

I'm very much interested in data ownership right now. Going back to the beginning of our conversation, there are these three ideas of artificial intelligence, machine learning, and big data, and many of the challenges for the human rights community are upon us already—not the questions about whether there will ever be sophisticated explicit or even full ethical agents that are not made from carbon; those are long-term issues that need to be on our radar now but are not vexing us at this moment.

But what is vexing us now is what is being done with data, and the key question there is, who actually owns this data? When you do anything online, when you work with the GPS in your car, when you touch your phone, all of that generates data. Who owns this data?

There is an interesting debate going on there that ranges from "Well, individuals always own their data. All data that we generate always belong to us"—that's one extreme view—to an extreme view on the other side that says, "Well, the data clearly belong to those who provide the platforms and the devices, who have basically come up with the technology that enables the generation of these data in the first place." That's obviously a tremendously important question right now because the distribution of economic power and influence in the future will very much depend on who owns the data.

I'm developing a project where I want to argue that what matters about these data is the regularities in them. The usefulness of data for companies or for governments is that they can make predictions about what will happen next in the life of one particular person, or in the actions of one particular person, based on lots of data generated by relevantly similar people. I want to argue that these regularities, which we are collectively producing, also need to be democratically owned; they belong to the ensemble of people who generate them rather than either to the individuals or to the companies. That's what I'm interested in.
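As a concrete, invented illustration of that point—predicting something about one person from data produced by relevantly similar people—here is a minimal sketch; the numbers, the variables, and the simple nearest-neighbor rule are assumptions for illustration only, not anything described in the article:

```python
# Minimal illustrative sketch (invented data): a prediction about one person
# drawn from data generated by "relevantly similar" people, using a toy
# nearest-neighbor average.
from math import dist

# Hypothetical records: (age, daily hours online) -> purchases per month.
others = [
    ((25, 4.0), 6),
    ((27, 3.5), 5),
    ((40, 1.0), 2),
    ((62, 0.5), 1),
    ((30, 5.0), 7),
]

def predict(person, records, k=3):
    """Average the outcomes of the k most similar people."""
    nearest = sorted(records, key=lambda r: dist(person, r[0]))[:k]
    return sum(outcome for _, outcome in nearest) / k

# The prediction rests entirely on regularities produced collectively.
print(predict((26, 4.5), others))
```

The point of the sketch is that the predictive value comes from the ensemble of records, not from any single individual's data—which is what motivates the argument about collective, democratic ownership.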

The bigger issue here really is ownership of data. Who owns these data?

Here I'm the director of the Carr Center for Human Rights Policy at the Harvard Kennedy School, and we have started a project on human rights and technology. We have 15 fellows, and I'd like to mention what a couple of them do in this data domain. One of the fellows is Mark Latonero, who recently published an op-ed in The New York Times that is much concerned with data collection as part of humanitarian relief efforts. In exchange for receiving humanitarian assistance, refugees often have to provide data, and then all sorts of things can go wrong in the data management, and some kind of confusion can wreak massive havoc down the line. It also creates a power differential. All of a sudden, a lot is known about these people, and they have to make it known in order to come by basic things that should be guaranteed by their human rights. That's also a kind of data-related issue.

Then, to mention one last example, we have a wonderful mother-daughter team among our fellows, Teresa Hodge and Laurin Leonard, who are concerned with developing a kind of data algorithm that helps with the reentry into the workforce of people released from prison. They often have a hard time getting their feet on the ground because basically the kind of information that's reliably available to would-be employers is just the fact that they have a prison record, and what their credit record is, which is often not good for people who come out of prison for various reasons. So, they developed an algorithm that generates more informative data about these people so that potential employers can learn more about these people than just their credit score and criminal record.

These are the kinds of issues that are currently on our radar here at the Carr Center among a number of others.

ALEX WOODSON: That's great. At times this conversation gets very theoretical, a little abstract, but these are very real-world issues that people are dealing with, so I think that's a great way to apply this technology.

The last question—I have to credit Adam Read-Brown, managing editor of Ethics & International Affairs, for this question. It's a little deeper than I would normally go, but I think it's a great way to wrap up the conversation.

Some people argue that moral machines will never exist, that machines will never have consciousness, and we'll never really need to worry about their ethics. Is this a difference in philosophy as compared to your view or where you think this technology might be going, or is this a different assessment of the technology altogether? What's the difference when you really break down these two points of view?

MATHIAS RISSE: Of course, there's always a question of technological feasibility involved: What will we actually be able to build with a reasonable amount of resources? There's always a question of what is in principle possible for us to do, and of what kind of activity, what kind of design, what kind of technological exploration is actually warranted and merited by whatever payoffs, rewards, and insights you would be getting that way. So, there is a feasibility question.

But then there is also a very interesting philosophical question here. By training I'm a philosopher myself, and one fascinating thing about this whole debate about technology is how many issues that seem to be rather arcane issues for the philosophy seminar room are circulating back into debates that we now have about the future of technology. Many of them are not as concretely upon us as these other ones that I just talked about—former prisoner reentry and things like that—but they are questions that are distinctly on people's radar these days, and this question about what machines are ultimately capable of is one of them.

The domain of philosophy that's concerned with that is the philosophy of mind, a domain of inquiry about what a mind actually is. Crudely speaking, there are two general orientations here. One orientation is that whatever we perceive as the mind is basically a kind of epiphenomenon, an add-on, something that grows out of the brain and the body. There's nothing more than physicality, physics and material things—lots of cell structure, so to speak—so there's nothing independent that is a mind. There is only one thing. There's only matter.

The other type of view is that there is not only one thing but there are really two things: There is matter, and then somewhat independently there is mind. The best-known version of that is that in addition to the body there is a soul. So, God provided us with a soul or the world somehow endows us with souls, and for some period of time a body and a soul live together in the same shell.

Another version of that, a less theological version, is that consciousness is somehow its own thing in ways which we don't quite understand, but there is matter, and there is consciousness, and they have ways of coming together. So, there is enormous disagreement about this. There is a range of well-known positions, but the disagreement is longstanding.

As far as machines are concerned, of course it matters massively which of these views is true, but my own sense is that actually in neither of these views do we have any reason to think that machines could not become full ethical agents much as we are. If you think there's only matter, and minds are add-ons to matter anyway, then maybe at some point we simply can produce machines that also have a mind as an add-on. If that's all there is to us—material objects that can be reproduced technologically—then in a way as part of that process there will also be the same mind capacities that we already have.

But even if you think there are these two things, a body and a soul or a body and consciousness, we don't quite understand in any interesting way how we ourselves come by them: How do I get inhabited by a soul, assuming I have one? How do I get inhabited by consciousness? How do they come to me?

Given that we don't know that, we have really no reason to think that machines couldn't be of a sort, at some point, that either souls or consciousness would come to them. There's no reason to think that, from a certain level of sophistication on, machines wouldn't themselves be of that kind. It's an exciting set of questions there, really.

ALEX WOODSON: Yes, definitely. As Adam said, talking about your essay, you could pretty much open to any page and have a long conversation about it.

Is there anything else you wanted to cover?

MATHIAS RISSE: Maybe let me say one thing by way of concluding, since we discussed a number of short-term and long-term questions.

There's one theme in this whole debate that I find a bit problematic, namely when people either say, "Well, the only questions that really matter are the long-term questions," or they say, "Oh, these long-term questions don't really matter; let's focus on the here and now." So, there is a bit of competition for relevance, and I think that's not a good attitude to have.

I think we need to understand that there is quite a range of issues that are upon us now, but technological change may happen at such a fast pace that even things that look rather futuristic to us now might be very concretely on our plates fairly soon—and that should not be the first time we're thinking about them. So I think we need to have this whole range of questions on our radar at the same time, without a spirit of competitiveness among them.
