Beneficial AI: Moving Beyond Risks, with Raja Chatila

May 15, 2024 70 min listen

In this episode of the Artificial Intelligence & Equality podcast, Senior Fellow Anja Kaspersen engages with Raja Chatila, professor emeritus at Sorbonne University, exploring the integration of robotics, AI, and ethics. Chatila traces his journey in the AI field from his early influences in the late 1970s to his current work on global AI ethics, discussing the evolution of AI technologies, the ethical considerations in deploying these systems, and the importance of designing them skillfully and mindfully.

Focusing on safety-first approaches over risk-focused frameworks and drawing parallels with other industries like aviation, Chatila advocates for AI systems that are designed to benefit humanity. What are the responsibilities of developers and policymakers to ensure these technologies are developed, tested, and certified with care and consideration for their effects on society?


ANJA KASPERSEN: Today we are joined by Raja Chatila, professor emeritus at Sorbonne University and a leader in blending robotics, artificial intelligence (AI), and ethics. As the former director of the Laboratory for Analysis and Architecture of Systems at the French National Center for Scientific Research and past president of the Institute of Electrical and Electronics Engineers (IEEE) Robotics and Automation Society, Raja has been a key figure in shaping the AI discourse. His role as chair of the IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems and involvement in the Global Partnership on AI underscore his pivotal impact on global discussions on AI and ethics.

Welcome to the podcast, Raja. We are very honored to have you with us.

RAJA CHATILA: Thank you for having me. The honor is mine.

ANJA KASPERSEN: Raja, our collaboration over many years now, especially through initiatives under the IEEE, has been instrumental in advancing discussions around the ethical implications of AI, challenging, I would say, the norms and striving for technologies that are safe and beneficial for humanity. Your dedication to this cause, coupled with your remarkable achievements, has truly set a standard in the field.

Reflecting back on your own journey, Raja, I am intrigued: What initially drew you to the field of robotics and AI? What and who were your inspirations?

RAJA CHATILA: That is a question that takes me back a very long time. When I was still an engineering student in the city of Toulouse I met someone named Georges Giralt. He was a researcher in robotics, but at that time—I am speaking about the end of the 1970s—neither robotics nor AI was as large a field as it is today. Each was a field of research, indeed an important one, but no one was speaking every day about robotics and about AI.

As you know, AI as a research field started in the late 1950s, so this was 20 years after it started. Robotics itself started at about the same time but more around the idea of manufacturing robots that perform systematic, accurate, and fast motions, for example picking, welding, or painting.

The field of intelligent robotics, which is the convergence of AI and robotics, was not as advanced. There were very few projects, and one of the main projects in the world was the one involving Shakey the robot at what was then called the Stanford Research Institute.

What led me to this domain is that I was intrigued by what intelligence is: Can a machine be intelligent? Can we actually have something called “artificial intelligence”? That is not a simple question, and you know all the debates we have about this even now. The intriguing idea of a robot, a physical machine that can exhibit intelligent capacities as humans do—that is the model of course, but we are far from it—was a very challenging one.

Is it even possible? At that time we did not have a single robot, a single machine, that was able to move on its own from point A to point B in a previously unknown environment, which shows how early it was. The question was: Here is an object, a machine, and it has computers—well, computers at that time were very large; it was the beginning of microprocessors. The very question of how to move in space, avoiding obstacles, discovering obstacles, detecting them, and representing the environment: all these questions were new. That was the challenge, to achieve that. Georges Giralt took me as a PhD student, and then I started this in my thesis.

The mystery of intelligence is still here. We did not solve it, of course. Now we have robots able to move, avoid obstacles, and build models of the environment. This has been achieved to some extent, with a lot of work; we were at the beginning of this journey. What is important, I think, is that when you start a new challenge, one that no one or very few have addressed before, a lot of things get discovered along the path.

For example, we have a common problem in robotics called SLAM (simultaneous localization and mapping). The expression SLAM came later, in the 1990s, but the point is that when I started to work on robots that had to build a model of their environment, move, and avoid obstacles, the surprise was that despite all the efforts in modeling, path generation, and path planning, the robot kept hitting obstacles even though we had the model of the obstacle. It knew it was there.

The interesting thing also in robotics is that you deal with the real world, and the real world is full of what? Uncertainties. So you build a model, and if it does not take into account uncertainties properly, you are bound to fail. This is a lesson: Always take into account the real world in your models. Try to understand how to cope with the real world. The real world is complex. It is a source of great discoveries if you look carefully and know how to search.

We came up with this idea that in order to move in an environment you need to model the environment and localize yourself—I am speaking of the robot, of course—at the same time in it. We used to say: “Let’s build the model and then localize the robot in it.” No. This has to be done at the same time, which means perception, action, and world modeling have to happen simultaneously.

This was one of the major achievements that many, many researchers took over and developed. It was very active in the 1980s and 1990s and is now standard. That was very rewarding work because it makes you think about the intricacies of a problem which initially started as a geometry problem and ended as a problem in which you have to deal with probabilities, filtering, information theory, and control: a lot of things that did not seem central in the beginning, and you discover them as you go.
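The uncertainty lesson Chatila describes can be sketched in code. This is not his lab's software; it is a hypothetical, minimal one-dimensional example of the probabilistic filtering he mentions (a Kalman filter, a common building block of early SLAM), showing that fusing a noisy motion model with noisy measurements keeps the robot's position uncertainty bounded, while dead reckoning alone lets it grow without limit.

```python
# Toy 1-D illustration: a robot moves in steps of 1 unit and gets noisy
# position measurements. A Kalman filter keeps the variance of its position
# estimate bounded; pure dead reckoning accumulates uncertainty forever.
import random


def kalman_step(mean, var, motion, motion_var, z, meas_var):
    """One predict/update cycle of a 1-D Kalman filter."""
    # Predict: apply the motion command; uncertainty grows.
    mean += motion
    var += motion_var
    # Update: fuse the measurement z; uncertainty shrinks.
    k = var / (var + meas_var)  # Kalman gain
    mean += k * (z - mean)
    var *= (1 - k)
    return mean, var


random.seed(0)
true_pos, mean, var = 0.0, 0.0, 0.01
dead_reckoning_var = 0.01
for _ in range(50):
    true_pos += 1.0
    z = true_pos + random.gauss(0, 0.5)  # noisy position measurement
    mean, var = kalman_step(mean, var, 1.0, 0.04, z, 0.25)
    dead_reckoning_var += 0.04  # no measurements: variance only grows

print(f"filtered variance: {var:.3f}, dead-reckoning variance: {dead_reckoning_var:.2f}")
```

The filtered variance settles near a small steady-state value, while the dead-reckoning variance grows linearly with every step, which is why a robot that ignores measurement uncertainty ends up hitting obstacles it "knows" are there.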

The point is also that when you go to, I would say, more complex issues about intelligence, meaning how to decide to do something in the world, not just how to move, you discover that the complexity grows wider. You need to take into account how to organize the decision-making process and how to mix perception, action, decision making, and reaction to the evolution of the environment, and this leads us to reflect on how our brain does this. How do we organize these complex processes so that we can make decisions and understand what is going on?

You see, the step beyond simply moving, which any animal can do, is to make sense of actions and the environment and to reflect in a more abstract way. The continuity between action, decision making, and abstraction is also a major point. It is still a major point today because AI systems today are based on machine learning, which uses data to build statistical models, and in general they are not connected with the real world. They take a lot of data and build a statistical model of the data, and based on this model you can predict and recognize that some new element belongs to a given class or has such or such a feature, but the connection to action can only happen in systems that actually deal with the real world and act in the real world. This is a very different problem.

ANJA KASPERSEN: Thank you for sharing that with us, Raja. It is a nice segue to my next question, because your insights relate to today, and your perspective on the current state of AI, particularly in the context of the ethical dilemmas posed by advancements like large language models, is of course invaluable. You came into this as a young man when the very field was being shaped, and some of the computational approaches we see today actually have their origins back in the 1960s and 1970s. Some of these developments, such as trying to understand the human brain, as you referred to, have sparked debate; the younger generation especially talks about the “ChatGPT moment” or the “brute force moment.” I think your perspective can bring, I would say, much-needed nuance to this conversation, reminding us that these developments do not occur in isolation but evolve within a broader social and technical ecosystem.

RAJA CHATILA: Yes, Anja. This enables me to also recall specific episodes.

In the beginning, I would say, the problem was basically to build a machine that was able to work on its own—we called it “autonomous,” although I know that in philosophy the word “autonomous” means a different thing. Autonomous for robotics engineers means that the system, after you have designed it, works on its own; you do not have to interact with it or help it. An autonomous robot is a robot that can move on its own to accomplish some tasks.

One of the first challenges we met at that time was, for example, sending a robot to Mars, where the robot is by itself and has to move and avoid obstacles and where there is complex terrain. We focused on how to make this robot able to do this. By the end of the 1990s and the early 2000s, we had achieved some performance with such so-called autonomous robots and we had a lot of robots deployed in factories, again manufacturing robots.

A new problem emerged, which was human-robot interaction, the idea that robots could be deployed not in specific environments such as, say, a planet or a factory, but at home in interactions with people, assisting people. Starting with this issue there was an additional problem I would say, an additional complexity. The complexity is: How do you interact with humans? Humans are not just an object you avoid. They are not just something that is there and you might possibly hit them while moving. Humans have to be considered as humans.

The move from a technological system that works on its own in isolation to a technological system that interacts with humans is a moment when you say, “Hey, what am I doing here?” Interaction with a human requires understanding what the human being is, even understanding human values, and understanding what it means to interact with a human. We still don’t know, by the way, but as soon as humans are in the game you have to change your perspective, and safety becomes of the essence. It is not the safety of the system; it is the safety of the human, of course. The safety of the system itself was addressed in one way or another, but the safety of the human means the safety of the task, and it means understanding human preferences but also understanding how humans behave. You have here a very complex situation in which you cannot just go ahead and develop your system without looking at what is happening in society.

We started asking the question: “Why should we develop robots to interact with humans? Why should we develop robots for elderly people at home? Is this really necessary?” You start to ask questions, and you find some answers or not. You can say, yes, it will be helpful because elderly people cannot accomplish all tasks or move, so let’s do robots.

Is technology the answer to this issue? Shouldn’t we rather consider the problem as a whole, which is a social problem? More complex things come in, and you have also to discuss with people in sociology, philosophy, and healthcare, and this widens your horizon, your perspective, and you start to understand things which are not just the technological issues.

To address your question, it is important to know that when we develop technology it is not done in isolation. It is done to interact with people, to help people, to serve society, and society is composed of human beings. Society is also a complex thing that evolves on its own. Considering human beings includes considering how the environment of human beings is also impacted.

Today of course this issue is a major one with climate change and climate action and the impact of AI on climate, positive or negative by the way at the same time, has also to be considered. In all cases at the core of the issues there is the human being and human society, so it is not just about the excitement of developing a new technology. It is about considering how this technology is—first question: Is it really useful? Is it really something we want? Is this the right solution? Also, when you start developing it, if you answer the question, you have to take from the outset human values into account.

ANJA KASPERSEN: So focus on the why and not just the how?

RAJA CHATILA: I think it is a change of perspective on how you build AI systems, how you consider the very approach. You do not start with: “Here, I have a mathematical problem or a technological problem, and I am going to solve it.”

No. You are going to say, “I am going to address the actual problems that are inherent to the fact that this technology is going to interact with human beings.” It is a different problem to address.

ANJA KASPERSEN: Is this a social problem, or is it an AI problem?

RAJA CHATILA: Absolutely. AI is not the silver bullet that can solve everything. Maybe the first question when we address this is: “Do I need AI to solve this problem?”

Of course, AI is a fantastic tool. You can use AI systems to process data that you have and try to make sense of this data, but this has also other consequences because you have to build a system that correctly does the job, and that is not always the case. Also, there might be other solutions.

For example, getting back to elderly persons and healthcare, isn’t healthcare also something in which other humans have to help humans, and not just technology? You need to consider this whole framework. If you want to use a technology or build a technology, this technology should help helpers and should also help the impacted persons, and you start thinking about who are actually the stakeholders when I develop a technology. There is a whole chain, of course the designers of the system but also the end users, the impacted persons, but also those who are going to operate the system, also those who are going to deploy it, maintain and fix it, and also the persons who surround—the family, for example—a person who is using AI.

You find out that when you develop and deploy a technology that interacts with humans, you have multiple different categories of human beings who are impacted in one way or another by the technology itself, because it changes our social fabric, our habits, and our economy. Global thinking is the only way to understand what is going on, to try to understand all these impacts, and to develop the technology accordingly, and not the other way around, which is: “Okay, I have this fantastic idea. I am going to develop this technique, I am going to put it on the market, and we will see what happens.”

ANJA KASPERSEN: Why was last year almost a watershed moment for starting to have this conversation, the ChatGPT moment, the brute force moment? Was it more compute and more data, or was it something more complex than that?

RAJA CHATILA: It is something more complex. It is an evolution and an achievement in the domain of machine learning. One thing that machine learning did very well was basically processing data to predict what category some new data falls into. As I said earlier, you provide a lot of data to the system, it builds a statistical model, and then you input new data and it is able to tell you which class or category this new data belongs to. This is called prediction. It enables a lot of problems to be processed better.

For example, if you ask for a loan, the bank wants to know if you will be able to reimburse it at some point. If you have features that resemble the category of people who are able to, the loan is accepted, and if not it is refused. This issue on its own could be a topic of discussion, but still it is basically about prediction.

Something that is really important and had resisted AI was the understanding of natural language, the language we speak. This was one of the first targets of AI. In the Dartmouth College project in 1956, one of the issues was using language, but it remained resistant. You had some systems that were able to translate, for example, but using language, understanding its semantics, and producing text was something that resisted.

This was made possible with a specific neural net architecture. There are several, but one of them is called the Transformer. It was published in 2017 and was known to specialists in the domain of AI but not to the general public. It enabled systems that were able to interpret natural language text, to try to understand the semantics of the text within its context, if I may say.

What happened in 2022 was something which was somewhat unprecedented. When I used to give talks, speaking to the general public or even to some classes, I would start my talk asking the audience, “Did any one of you interact with an AI system?” Only a few hands raised. Some people had some experience because they had seen a robotics exhibition, or some people know that looking for something on the internet using a browser you actually deal with an AI system which prepared the answer for you and ranked the pages or whatever.

Now when I ask the same question, everyone raises their hand because everyone has interacted with ChatGPT. This is what happened: OpenAI made public this system called ChatGPT, which is a chatbot, a system that interacts with people by chatting. It is based on the Transformer architecture and is called GPT, generative pre-trained transformer. It is pre-trained on a huge amount of data and produces a huge model. This text model then became multimodal, with images and sound as well, though not completely deployed, and today you can prompt it with a request and it will provide you with a lot of text answers that appear to make sense.

This is the new thing. Everyone on the planet was able to interact with an AI system directly, and the AI system would answer the request in natural language. This is new because interacting in natural language with an AI system had not happened before at the level of the wide public. It was something done in some places, but not something anyone could do. Chatbots existed, but you could interact with them only by asking very specific questions: maybe you want to make a reservation or buy a plane ticket, and you can have a chatbot to interact with in a limited domain.

Here you can ask the system anything, and it will tell you anything. That was the moment when people discovered the power of AI, and not only people but decision makers and politicians. Everyone discovered the power of AI with this system. It has this capacity to use our language, and our language is very much connected to our reasoning and our humanity, so the attribution, the projection of human capacities onto AI, has risen a lot, because a lot of people believe this system really is as intelligent as humans are. It uses language and produces language.

The point is, like other AI systems, despite its fantastic performance, this is based on a statistical system, and this has inherent limitations. Statistics means that you are able to provide correlations, not reasoning, not causality, and the system provides answers based on the correlation between your prompt, what you input into the system, and its immense model.

Sometimes this makes sense and the result is well-constructed. Sometimes the correlations are not correct, so the system output, still very well constructed, includes false information just because it was correlated. Sometimes it includes what are called “hallucinations.” I don’t like this word because it is not really something about hallucinations. It is about correlating events or pieces of text that have been mixed in the model, producing information that does not correspond to reality.

ANJA KASPERSEN: Fabricating connections.

RAJA CHATILA: Exactly. That’s it.

The point is that by fabricating this text, this information, people think it is true because they cannot verify it. Therefore you have in your output a mixture of true information, something that happened in the real world, and something that didn’t, but it is mixed together and you cannot know which is true and which is not, which means that these systems are able to fabricate a narrative other than the truth.

Here we have a profound problem, if I may say a few words about it. When you have a system that you think is reliable and that you think is able to answer any of your questions and it tells you things that you cannot really verify—because you don’t know what to verify, you don’t know in this answer what is true and what is not. You are not even going to think: Well, let me see if this is true or not. How do I know what to verify?

Now the risk is that according to your own interaction with the system you have some information which is different from the information that I may have or another person has. The disinformation mixes real and false information, so we will have different truths. We will not have the same background, the same unique support of truth in our society. You are going to think something different from what I am going to think. This might even be multiplied by dissemination over social media, for example, which is a fantastic way to inflate the information and spread it.

This is where I see a major issue because if we as humanity or even in the same community do not have the same knowledge about what is true and what is not, then there is a danger that we will not be able to understand each other anymore. There is a danger to the fabric of society itself.

What will happen indeed is that we will tend to take the outputs of these systems for granted. Very few people will even make the effort of trying to verify or check, again because the question is, what do I verify? I don’t know where to start.

Also, most people are not aware of these issues. That is another problem: the problem of literacy about AI, understanding what the foundational principles of these systems are and what their inherent limitations are. With most technologies we use we do not necessarily understand how they work, but we know something about how to use them, what precautions we should take, and what concrete measures we should take to mitigate the negative consequences or even avoid them completely. With every technological object that you buy or use you have a notice that tells you what not to do and warns you what might happen if you do not use it correctly.

Still, there is nothing that says how to use an AI system when you are not a specialist. You have this system, which is available. It is not the only one now, of course. You have a lot of systems. I started with ChatGPT, but now there are many others. There is only one single warning about the use of the system that tells the user—if you have interacted with ChatGPT you must know this. By the way, I often ask people, “Did you notice that in the window where you interact with the system there is a small message written at the bottom of the window?” No one noticed anything. On the bottom of the window is written: “ChatGPT can make mistakes. Consider checking important information.”

ANJA KASPERSEN: That is a new feature. That was not there from the outset.

RAJA CHATILA: It wasn’t there, and at some point it was a different message which was more explicit, saying that it can make mistakes about facts, people, and events, something like that, or locations. But now it tells you to consider checking important information. This is very general, but at least it is there. Finally, it means you are at your own risk. If you want to trust the system, just be aware it can make mistakes.

ANJA KASPERSEN: You mentioned trust. Does it surprise you that so much of us, of what it means to be human, is embedded in the way we communicate with one another, and that this can be so easily reproduced in something that resembles reasoning, although not human reasoning but a different type of reasoning? Does it surprise you?

RAJA CHATILA: Yes, frankly, I was surprised. I was also impressed by the performance of this technology. Of course you can understand it after the fact, once you understand how it works, but one must recognize that the performance of this technology is really impressive. That is a factor in its success, of course. Before that we had very limited systems in terms of dialogue.

Maybe two things here. One, getting back to the humans: our capacity, our ability, our propensity, our natural behavior to attribute intelligent capacities to objects, to things that look like they are living or human, even with very simple features, is immense. This is something encoded in our behavior. It was inscribed in our behavior by evolution, probably because this capacity to attribute enables better social interaction, but it also enables us to distinguish what is living from what is not. What is living can be dangerous. We do not want to become prey for it, so it is better to over-attribute in a way, so we can better defend against any predator.

Still, our capacities for attribution are very high. We easily project on the system intelligent capacities and human intelligence.

But it is not the only thing. Indeed, the text is very well written in general. It makes sense. That was a major accomplishment. One must have the honesty to recognize that it was really not so easy to achieve. It comes, of course, from a lot of research in both linguistics and machine learning, because the interpretation, the semantics, the meaning of words in a text depend very much on the text itself. Every word in the text has a meaning, of course, and the meaning depends on what precedes it in the text and sometimes on what follows it, so if you want to understand a word in a document, you have to read some length before it and maybe some length after it, and then you better understand what it is about, what this very word means.

That is basically the idea. If you have context and the context is in the text, and if you have enough context, then you can interpret the meaning of words and provide or fabricate text that makes sense as well because it correlates well enough with your input.

This has been made performant: able to provide long texts and to process long texts. Note that what I have been saying for a few minutes is that the interpretation is within the text that is used, and the text that is used is this immense model in which the text has been completely transformed into something else. It is vectors, actually, a computation of vector products and so on. I am not going to get into the technical details, but the text is transformed into a mathematical representation, a computation is made, and you have the result.

It makes sense because it is made in a very good way, but there is no connection with the real world, to get back to my initial thoughts about the interaction with the real world. The system does not actually embed the semantics, the meaning, which is grounded in the real world. The meaning is only the meaning that is in the text, in the documents. How much these documents connect to the real world—usually the documents connect to the real world, but we humans make sense of this connection because we live in the real world and we know what a given object in the real world is. The system doesn’t.

For example, the system can speak about apples, about carrots, about lions, and about whatever you want in a very well-informed way, but it does not really know what an apple, a lion, or a carrot is. It is just what the frequency of the words in the model says. The phenomenological, real-world experience is not there. Therefore, the system can say things that don’t make sense. We might laugh about it, but then we will continue to use the system nevertheless.
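The correlation-without-grounding point can be illustrated with a toy sketch. This is vastly simpler than a Transformer, and the corpus and code are invented purely for illustration: a bigram model that picks each next word from word-to-word statistics alone can splice two true sentences into a fluent falsehood, because it has no model of the world behind the words.

```python
# Toy bigram "language model": each next word is chosen only by how often it
# followed the previous word in the training text. No grounding, no facts.
import random
from collections import defaultdict

corpus = ("the lion eats meat . the rabbit eats carrots . "
          "the lion lives in africa . the rabbit lives in burrows .").split()

# Count word-to-word transitions.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n, rng):
    """Emit n words, each sampled from what followed the previous word."""
    words, w = [start], start
    for _ in range(n):
        w = rng.choice(follows[w])
        words.append(w)
    return " ".join(words)

print(generate("the", 5, random.Random(4)))
```

Because "eats" follows "lion" and "carrots" follows "eats" in the corpus, the model can emit "the lion eats carrots": well-formed, statistically plausible, and false, since the model has no idea what a lion or a carrot is.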

There was that moment because there was this major achievement, some said a revolution, of deploying to the general public an AI system that gives the illusion of manipulating natural language correctly, providing information and answering questions about anything, because the model is so large and so diverse that it can usually do this correlation correctly, but sometimes incorrectly, mixing things together.

ANJA KASPERSEN: Exactly. We are going to jump now to the safety angle. You and I have worked together in a few different capacities over the years, and now more recently under the auspices of the IEEE, the largest technical professional organization in the world, which pioneered a lot of the work that we now see in the AI and ethics space. It was one of the first to discuss these issues, under your chairmanship.

You pushed a new way of thinking about this, a new paradigm, if I may, where you basically called for us to look beyond risks. Can you talk a little bit about this “beyond risk” work and more importantly, if we put safety first as a principle rather than talking about risks first, a safety-first approach, what would that look like?

RAJA CHATILA: That is maybe the core of the issue, the important point. This notion of risk, this notion of safety, is key to the debate. It is quite common to say that very often a technology presents some risks. The classical example is the knife. With a knife you can cut some butter or you can hurt yourself or even kill someone, maybe not exactly with the same knife as a butter knife, but still.

The idea is that technology frequently has an inherent risk; the claim is that technology is neutral and it depends on how you use it. If you use it correctly, you will reduce the risk, or there is no risk at all. If you do not use it correctly, then let’s try to mitigate this risk.

I am not going to spend time discussing this approach of technological neutrality. Technology is not neutral, of course, because it is developed by humans and follows decisions made by humans, so you can develop a technology in such or such a direction, and there is nothing deterministic about the fact that we will develop such or such a technology. Some technologies are much riskier than others, and we have a choice about these.

The idea of saying that AI is like any other technology and has some risks, so let’s develop it, use it, and then try to mitigate the risks, poses a very important problem, because AI is not just a technology like any other. The reason is that it addresses intelligent capacities. It is called “artificial intelligence” for a reason. It makes decisions and influences our behavior; if we ask ChatGPT something, we take something from it. It is widespread.

Historically, one technology that is specifically risky is aviation. Aviation is risky, and at the beginning it was not as safe as it is now, but the risk concerned one plane, one model, at a time, not all of them at once. The use of the technology was limited when it was riskier, so the consequences, even if catastrophic for the passengers of course, were limited to those passengers.

What happened is that a lot of effort, research, engineering, technologies, and regulation have been devised to reduce the risks of that technology, aviation, and to use it safely at a level which is very wide, mass level.

Today AI is used all over the world instantly. Someone releases a model, and its use is all over the planet. It can impact billions of people immediately. There is no process deployed to really mitigate those risks, to make sure that the system you are using is safe. Of course developers have built some processes to mitigate some risks. For example, ChatGPT has some filters that are able to limit harmful content. You can get around them, by the way, but still they have made this effort. This is not completely safe yet.

As I said earlier, when using the technology you do not get a manual to learn how to use it safely. You are going to use it and naturally trust it because it knows a lot, it knows everything, so why shouldn’t I trust it?

We have a situation where you have a technology that is instantly deployed to the masses and presents inherent limitations, and people are not really aware of the limitations. If you take a plane, you know there is a very limited chance that an accident will happen, but you take that risk.

Why do you take this risk? Why do you trust aviation? You trust the manufacturer, which is a human organization. You don’t trust the plane, actually; you trust the manufacturer that built the plane. But to trust the manufacturer you need some guarantees, and where do the guarantees come from? They usually come from society, society in this case being regulation of the aviation industry, so that the industry follows a development process that is inherently safe. That does not mean that there are no risks at all, but it means that we have reduced the risks to a level so low that the risk is acceptable. Indeed the number of aviation accidents is very small. It takes very special circumstances.

This is due to regulation. You have certification processes, and you have authorities like the Federal Aviation Administration in the United States, the International Air Transport Association at the global level, and others. You have authorities that impose things. Therefore you know that there is a human-made system that is there to guarantee, to some extent, that you can use this technology with some trust. I don’t trust the plane. I trust this whole process which is in place.

ANJA KASPERSEN: You are touching on a very important point here. I have been a strong proponent as well of moving away from the notion of trustworthiness when we talk about systems, toward talking about reliability, dependability, verifiability, pertinence, and all the other properties that will help us navigate the features of these complex systems. This builds on your earlier point that the focus and responsibility were shifted away from the manufacturer to the consumer. That causes problems when we then try to instill this acceptance of risk, which I know concerns you, and start instilling concepts like trustworthiness, which fundamentally contributes to moving that responsibility onto the consumer, who may have varying degrees of literacy or even varying degrees of insight into the systems they are being exposed to.

I would love to hear your views on what you are seeing in this field. Also, are you concerned that we are moving down a pathway right now where we are not learning from the aviation industry but are instead clambering onto this risk framework in everything we do on AI, as opposed to going for the more classical approaches in engineering, where you have to have a demonstrated safety record and where you have those checks and balances in place? What are you seeing, and why are we pushing so hard on the risk paradigm, the risk framework, in your view?

RAJA CHATILA: I think that is the core question. Thank you, Anja, for asking it.

Globally the paradigm has changed in a way where instead of saying, “Let’s make sure this technology is safe before launching it, before making it available,” we are in a situation where we just recognize that the technology is there, it exists, so let’s mitigate its negative consequences, as if we were in a situation where there is a natural determinism—I wouldn’t say obligation but almost—to use this technology. It is there, so let’s mitigate the risks. Hence, all these risk-based approaches for governance.

Why is it so? It is so because I think most governments or organizations think that the economic value of this technology is very large, and they do not want to be left behind, so they want to adopt it, develop it, and push its adoption by companies and society because of the economic value. Not only the economic value; it is also a question of global power balance because the capacities of these technologies are so high that you have a question of geopolitics in play.

To get back to the technology itself, the stakes appear to be high, and therefore we want to use it, but we know it has limits, we know it is not really safe, so let’s try to reduce these risks. Therefore we come up with risk-based approaches to legislation. We want to impose some codes of conduct, which are of course not mandatory. We want to find a way to use the technology to the maximum while trying, after the fact, to reduce its shortcomings.

Of course this will not work, in my opinion. I am a proponent of legislation that frames the development and use of AI systems, because AI should not be an exception: almost every technology is framed by regulation. Almost any technology has some certification process that says it is safe to be used by the public and follows some standards. Sometimes there is certification, but not all the time, and the regulation of AI is always met with a lot of pushback.

There is some regulation. There is the European regulation obviously, which has been voted on recently. There is some regulation in China. There is some regulation in the United States at the level of states but also at the federal level, for example the Executive Order by President Biden on October 30 of last year. There is always pushback about regulations, and the reason is the high stakes of this economic value and also the political and geopolitical issues.

This is why we have here a situation which is in a way unprecedented: a technology that reaches everyone at home and at work and that is in general not much regulated. Even if the AI Act in Europe has been voted on, it is not applied yet. It will not be applied for some time: six months for systems which are considered prohibited and two years for most other provisions. The situation in my opinion is not the best for the interests of society in general.

But we have these regulations. They are better than nothing, but it is about time that we think differently, and we have done this before with the Global Initiative on Ethics of Autonomous and Intelligent Systems, which started in 2016. One of the main results of this work is a document called “Ethically Aligned Design,” which says in three words how we should proceed. We should design systems based on ethical values and ethical reasoning so that they are aligned with human values. It is not about risk mitigation but about designing systems that are safe from the outset. It is not about framing risk; it is about developing systems in a way that we know complies with our obligations of doing good, doing no harm, justice, and respecting human autonomy. These are four principles of bioethics that I recall here, but they apply here as well. Many ethical proclamations for AI are also inspired by these principles.

The first one, being beneficial, is the one that is most neglected in the domain of digital technologies and AI, and it is the one we should recall and put forward in this approach. Basically, if you start by thinking, “I want to build this system so that it is aligned with human values,” you will start by asking what those values are and who the impacted persons and categories are, and design your system accordingly.

The way to go is to have this proactive approach so that the system is safe by design. If 100 percent safety is not possible, which can sometimes be understood, you know in advance how to frame the system so that you can state its safety. It is not about the accuracy of the system, for example. Accuracy is a measure in AI of the quality of the result, but everyone who works in AI knows that it is not a good measure on its own. You have other measures also. The point is that the system can be completely wrong with high accuracy because it does not understand what it is doing; it does not interpret the data.
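The point about accuracy being a misleading measure can be illustrated with a small sketch. This example is not from the interview; the dataset and the trivial "classifier" are invented for illustration: a model that always predicts the majority class scores 95 percent accuracy on an imbalanced dataset while never detecting the rare class it exists to find.

```python
# Illustrative sketch: why high accuracy alone can be misleading.
# A "classifier" that always predicts the majority class looks accurate
# on imbalanced data while being useless for the rare (positive) class.

labels = [0] * 95 + [1] * 5   # 95 negatives, 5 positives
predictions = [0] * 100       # always predict the majority class

# Fraction of examples labeled correctly.
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# Fraction of true positives that were actually detected.
recall_positive = sum(
    p == 1 and y == 1 for p, y in zip(predictions, labels)
) / 5

print(accuracy)         # 0.95: looks impressive
print(recall_positive)  # 0.0: the rare class is never detected
```

This is why practitioners look at complementary measures such as recall or precision per class rather than a single accuracy figure.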

I think focusing on the risk is the way that has been found to nevertheless address those inherent limitations, those problems that we can find in AI systems: for example, bias, the lack of robustness, safety issues, and the issues related to mixing false and real facts, et cetera. The approach says: let’s nevertheless propose a framework to try to reduce the risks. But actually they are not really reducing the risks, and the reason is very simple.

How do you evaluate risk? Usually, technically, in safety studies risk is defined as the probability of an event occurring multiplied by the magnitude of harm of that event. If it is very probable and the harm is high, it is a very, very high risk. If it is not so probable but the harm is high, the risk is reduced. If it is very probable but the harm is low, of course you can say it is acceptable.
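The classical computation described here can be sketched in a few lines. The numbers and the acceptability threshold below are illustrative assumptions, not values from any regulation or standard:

```python
# Minimal sketch of the classical risk formula:
# risk = probability of an event x magnitude of its harm.
# The threshold and example values are illustrative assumptions.

def risk_score(probability: float, harm: float) -> float:
    """Expected harm: probability of the event (0-1) times harm magnitude."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    return probability * harm

def classify(score: float, acceptable_below: float = 1.0) -> str:
    """Label a risk score against an assumed acceptability threshold."""
    return "acceptable" if score < acceptable_below else "unacceptable"

print(classify(risk_score(0.9, 10.0)))   # very probable, high harm: unacceptable
print(classify(risk_score(0.01, 10.0)))  # improbable, high harm: acceptable
print(classify(risk_score(0.9, 0.5)))    # probable, low harm: acceptable
```

As the discussion that follows makes clear, the hard part is not this multiplication but assigning any meaningful number to the probability or the harm for something like damage to democracy.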

Here the problem I would say is that the factors of this computation are not well defined. How do you define in a number the risk to democracy when you have misinformation that is produced by an AI system and disseminated all over the planet? Please define risk. Please define the probability of harm in this case. I am not sure we can do that.

You can define the risk of someone misusing a knife because you have a very specific description of the situation, but defining such risks when it gets to risks to human rights and risks to democracy is very difficult. How do you mitigate these risks? How do you compute the probability? How do you compute the harm? How do you come up with mitigation measures? The answer is not so clear, and the regulation actually tries to say, “Well, if we consider that the risk, for example to democracy, is very, very high, then let’s just ban this application,” but there are ways to get around this ban.

Or, “Let’s certify it.” How do you certify it? Those certification processes are actually not well defined yet. How do you test it, especially when it comes to risks to human rights and human values? You might come up with ways for very specific technical applications, for example if you reduce the framework, if you reduce the situation, but computing the risk in general and assessing the intensity of harm when it relates to human rights is a very tough problem.

This is why I see that this approach, which is necessary in this situation that we are living in, also has inherent limitations. This is why we need to change our perspective and have this approach of thinking about safety first, safety by design, human values first, ethically aligned design, value-based design, and thinking how we can make this possible.

ANJA KASPERSEN: Based on your experience, Raja, because you have been involved in developing standards in this space for a very long time—you alluded to standards earlier—why in your view are standards so instrumental as a tool to bring about this, if I may, “normative” framework? You spoke about frameworks earlier. Why are standards a helpful tool for us?

RAJA CHATILA: Because standards are norms. If you, as a manufacturer or designer of a system, adopt a standard and put it to work in your system, it means that you have adopted a transparent process, transparent because the standard is known, a process which defines your approach to designing such or such a system.

This is why standards are important. It is not just because you have interoperability between systems but because you can verify, you can have a certification process, you can have a verification process, and you can have an audit process that, yes, your system complies with such-and-such design approach and such-and-such standard.

If we are able—and we did it in the IEEE Global Initiative; we developed several standards like this—to develop standards which I call “techno-ethical standards,” standards that are about technology but grounded in ethical considerations and values, then manufacturers of those systems can say, “Well, I’m going to adopt standard such and such.” The standard is known, so you have transparency. The standard has been developed over several years with stakeholders so that it complies with this idea of ethical design or safety by design, and therefore it will enable you to build a technology that can be more trustworthy. We have spoken about trustworthiness, and this is the process: you acted in a transparent way, with a norm that defines how to develop trustworthiness in the system, and you have a whole process around the standard that can audit your adoption of it and possibly certify it. This is what we need.

ANJA KASPERSEN: A trustworthy process and a reliable system.

RAJA CHATILA: Exactly. The standard is not necessarily imposed, but it might be imposed by some legislation. Even if it is not imposed, even if it is a choice to use it, people can know that you have adopted this standard and better trust your approach.

Some people mistakenly think that we are going to standardize human rights when we speak about a techno-ethical standard. No, it is not about defining in a very procedural manner what are human rights. It is defining how to respect them. It is defining how to build a system in which every measure is taken so that those values or those human rights are not put in danger.

ANJA KASPERSEN: It allows you to test, validate, and verify the system against those standards that you are trying to embed.


ANJA KASPERSEN: As a last point, Raja, if you were to give our listeners one piece of advice on how to navigate this immense complexity with AI being pretty much in headlines around the world on a daily basis, what advice would you give, having been in this field for a long time?

RAJA CHATILA: I would say that we have to go beyond the excitement of the new technologies that appear to be very impressive and have very high performance. Don’t be blinded by this capacity. Consider that every technology has to be at the service of humanity, for the benefit of humanity, for the benefit of every human being, and this is what we should put foremost. It is not about being a techno-skeptic. It is not at all that. It is not about being completely impressed by the performance of technology. It is about being a human being. It is about thinking about our human values first.

Think about this first: What do you want for your children? What do you want for the people you love? You want their happiness. How do you ensure this? You ensure this by using a technology that can help them but that doesn’t put them at risk.

ANJA KASPERSEN: What a great note to end on. Thank you so much, Raja, for taking the time to speak with us and our listeners.

RAJA CHATILA: Thank you very much again.

ANJA KASPERSEN: To our listeners, as always, thank you for joining us in this insightful exploration. Stay connected for more thought-provoking discussions on ethics and international affairs. I am Anja Kaspersen, and it has been an honor to host this dialogue. Thank you to the team at Carnegie Council for producing this podcast and to all of our listeners for the privilege of your time. Thank you.

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.
