Behind AI Decision-Making, with Francesca Rossi

January 23, 2020

Replica of Jeopardy! stage with IBM's Watson, Mountain View, CA. CREDIT: Atomic Taco (CC)

ALEX WOODSON: Welcome to Global Ethics Weekly. I'm Alex Woodson from Carnegie Council in New York City.

Today, I'm here with Francesca Rossi. Francesca is the IBM AI Ethics Global Leader and a distinguished research staff member at IBM Research. She has published over 200 scientific articles in journals, in conference proceedings, and as book chapters. She is also the general chair of the 2020 AAAI Conference on Artificial Intelligence in New York, which takes place February 7–12.

Francesca, thank you so much for coming today.

FRANCESCA ROSSI: Thank you.

ALEX WOODSON: We started following AI closely at Carnegie Council about five years ago. We had a talk back then with Wendell Wallach. What has changed since then? What are some of the technological developments for someone who hasn't been following this day to day?

FRANCESCA ROSSI: AI changes very rapidly. The science and the technology change very rapidly, and so do the applications. In the last five years I think we have seen a lot more applications of AI techniques. A lot of these applications use a sub-area of AI called machine learning or deep learning, which tries to gather information from collected data and to understand what that data says about how to predict what will happen in the future. That gives a wide range of applications, so we have seen AI being applied in most of the things that we do in our everyday life.

Scientifically, there is a lot of effort by many researchers to go beyond what deep learning and machine learning can provide for the existing applications, because even though there are so many successful applications, there are some limitations of these data-driven approaches that we are still trying to understand how to overcome. For example, to really learn how to solve a task with high accuracy and flexibility we need to have a lot of data, a lot of training examples, to give to an AI system. In some cases there may not be many examples to give. We think it's kind of primitive that we need so many examples in order to learn how to solve a task. So people are trying to make these learning capabilities more efficient, to learn from fewer examples, just like we learn.

Of course, as human beings we learn how to solve a new task. We don't need millions of examples of how to solve it because we can rely on all the knowledge that we gathered from solving other tasks. That's the idea: to try to inject this into AI systems, to make AI capabilities broader so that they can transfer information and knowledge from one task to another, and so that learning becomes more advanced and more efficient.
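What Rossi describes here is essentially transfer learning. Below is a minimal sketch of the idea in Python, assuming PyTorch and torchvision are available; the pretrained backbone, the ten-class target task, and the hyperparameters are illustrative choices, not details from the interview:

    # Transfer learning sketch: reuse knowledge from a large, generic task so a
    # new task can be learned from far fewer examples. Names are illustrative.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a network already trained on a big, generic dataset (ImageNet).
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the learned features so they carry over unchanged to the new task.
    for param in backbone.parameters():
        param.requires_grad = False

    # Replace only the final layer and train it on the new, small dataset,
    # e.g. 10 classes with a handful of labeled examples each.
    backbone.fc = nn.Linear(backbone.fc.in_features, 10)
    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

Only the small final layer is trained from scratch; everything the network learned on the earlier task is reused, which is what lets it get by with far less data for the new one.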

ALEX WOODSON: Just to make this a little bit more concrete, you said that AI is used in many everyday applications. What are some of the recent developments that we might be using day to day that we might not even know that AI is powering?

FRANCESCA ROSSI: All the apps that we use on our telephones, all the social media that we use every day, all the data that we provide on social media—the text, the likes, the pictures, the videos—are then used to train an AI system that, in the case of social media, can better predict what we might like so they can give us ads that are personalized, because that's the business model of social media platforms.

Every time we use a credit card there is an AI system that checks whether that transaction may be suspicious or not, for fraud. Every time we use a Global Positioning System to navigate, that is also supported by an AI algorithm found in every AI textbook, one that tries to find, in a very short time, the optimal route from one place to another, given the maps and given the information about traffic and so on. Really, from the time we wake up to the time we go back to sleep, we are always using AI.
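The textbook route-finding algorithm Rossi alludes to is most likely A* search, though the interview does not name it. A minimal Python sketch, with a made-up toy road network and heuristic:

    import heapq

    def a_star(graph, start, goal, heuristic):
        # graph: node -> list of (neighbor, edge_cost)
        # heuristic: node -> optimistic estimate of remaining cost to the goal
        # Each frontier entry: (estimated total, cost so far, node, path)
        frontier = [(heuristic[start], 0, start, [start])]
        visited = set()
        while frontier:
            _, cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbor, edge_cost in graph[node]:
                if neighbor not in visited:
                    new_cost = cost + edge_cost
                    heapq.heappush(frontier, (new_cost + heuristic[neighbor],
                                              new_cost, neighbor, path + [neighbor]))
        return None

    # Toy road network: travel times in minutes between intersections.
    roads = {"A": [("B", 5), ("C", 10)], "B": [("D", 7)], "C": [("D", 2)], "D": []}
    estimates = {"A": 8, "B": 6, "C": 2, "D": 0}
    print(a_star(roads, "A", "D", estimates))  # -> (12, ['A', 'B', 'D'])

The heuristic (here, an optimistic estimate of remaining travel time) is what lets the search find a shortest route quickly without exploring the whole map.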

ALEX WOODSON: As your job title alludes to and our organization title alludes to, we're very interested in ethics, and ethics when it comes to AI is something that again we have been focusing on. Just to get everyone on the same page, what does it mean for AI to be ethical? Maybe another way to look at it is, what could be some of the consequences of unethical AI?

FRANCESCA ROSSI: I usually don't use "ethical" as an adjective for AI because I think that it is not the technology that is or is not ethical. I see AI ethics as a multidisciplinary, multi-stakeholder field of study that brings together experts in AI but also experts in many other disciplines, like philosophers, psychologists, sociologists, economists, and policymakers. It has to be multi-stakeholder to really understand what are the best practices, the best processes, and the best governance to put in place so that AI is as beneficial as possible for the widest part of the population.

This has to do with many different lines of work. One line is what properties we want the technology to have once it is deployed. Typical examples are that we want the technology to be fair when making decisions or recommending decisions to some human being, so we want these decision-making capabilities that we inject into the technology to make sure that it doesn't discriminate. So one thing is fairness.

Fairness is just one of the values that we want to inject into the technology, but there could be other values. In general, the question is, how do we make this technology, once it is deployed, value-aligned to our values? How do we make sure that it respects the values that we want? Fairness, of course, is one very important one.
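One way to make the fairness requirement concrete is to check a simple statistical criterion such as demographic parity: does the system recommend favorable decisions for different groups at roughly the same rate? A toy Python sketch; the decisions and the 10 percent threshold are invented for illustration, and real audits use richer criteria:

    # Toy demographic-parity check: compare approval rates across two groups.
    def approval_rate(decisions):
        return sum(decisions) / len(decisions)

    # 1 = favorable decision, 0 = unfavorable, recorded per group (made-up data).
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]
    group_b = [1, 0, 0, 1, 0, 0, 1, 0]

    gap = abs(approval_rate(group_a) - approval_rate(group_b))
    print(f"approval-rate gap: {gap:.2f}")  # here: 0.75 - 0.375 = 0.375

    # The acceptable gap is a policy choice, not a technical one.
    if gap > 0.10:
        print("Potential disparate impact: audit the data and the model.")

Whether a gap like this actually signals discrimination depends on context, which is exactly why Rossi stresses the multi-stakeholder nature of these questions.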

Another one is, how do we make sure that this technology, whatever decision it makes, can explain why that decision was made? This maybe is not that important in some applications, like when we receive an online recommendation for books to buy. We don't necessarily want an explanation of why it is making that recommendation instead of another one. But when the technology is used to make recommendations to a doctor about which therapy to use for a patient, or in the judicial system, or in the public sector, then it is important, and sometimes it is also legally required, that you get an explanation and can trace back to the factors that influenced that decision. So explainability is very important.
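A very simple illustration of what tracing back the factors can look like: for a linear scoring model, each input's contribution to the final score can be reported directly. The feature names and weights below are invented for the example; real clinical or judicial systems are far more complex and often need dedicated explanation methods:

    # Explaining a linear scoring model by listing each factor's contribution.
    # Weights and patient values are invented for illustration.
    weights = {"age": -0.2, "blood_pressure": 0.8, "prior_condition": 1.5}
    patient = {"age": 0.5, "blood_pressure": 1.2, "prior_condition": 1.0}

    contributions = {f: weights[f] * patient[f] for f in weights}
    score = sum(contributions.values())

    print(f"recommendation score: {score:.2f}")
    for factor, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {factor}: {value:+.2f}")  # largest influences listed first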

Then, there are other properties, like robustness or even some form of transparency about what has been done to develop the technology. In general, these are properties of the technology.

Then, there are also other lines in AI ethics that have to do with the fact, for example, that most of these successful applications, as we said before, are based on data-driven approaches. Data is a fundamental ingredient of AI. Since there is so much data that is used to train these AI systems—and a lot of this data is personal data from people—then there are a lot of other issues about what is being done with that data. Who owns my personal data? Do I still own it once I give it to the AI system? What is being done with that? Who is it passed to? All the issues about the data policies are very important. Of course, in some parts of the world, like in Europe, there are regulations that are very strict about how data can be used, but in general in other parts of the world it's not like that.

Then, there are issues about how this technology, once it is so pervasive in a society, impacts society and jobs but also other parts of society—how we interact with each other, how we interact with the technology, and how our children are growing up in this environment where AI is so pervasive.

Finally, there are also, within AI ethics, more long-term concerns about what is going to happen when the technology's capabilities become so advanced that we may lose control of it. How do we make sure that we are still in control while the technology is advancing and becoming more capable of doing things, even with superhuman capabilities, which are already present but only in very narrow domains? Once the technology becomes that capable in broader domains, how do we make sure that we really stay in control? This is tied to the first thing I said about value alignment, meaning, how do we make sure that we advance the capabilities of AI while still maintaining this value alignment between AI and our values?

ALEX WOODSON: I think this leads into what I want to speak about next, which is that some of your recent work has incorporated the work of Daniel Kahneman. He's a psychologist and economist and the author of Thinking, Fast and Slow. He's known for the "System 1 and System 2" theory. Can you explain this theory and how you're using it in your work on AI?

FRANCESCA ROSSI: This also ties to what I said before about recent efforts in the research community. AI, over the 60-plus years that it has been going on as a research discipline, has always had these two parallel paths in research: One is the data-driven approaches that are so successful right now, and one is the so-called "reasoning and symbolic" approaches, where basically the human logical capabilities are used to define a way to solve a problem and then to code it into a machine.

The two approaches have basically been going on in parallel, but now people are coming to understand that, to overcome the limitations of the data-driven approaches (which are so successful, but which we need to be more capable, more general, and more flexible), it seems to be a good idea to combine the two. In Daniel Kahneman's System 1 and System 2 theory one can see an analogy, not a complete one but in some senses: the System 1 capabilities, our intuitive capabilities, the things that we do very fast without even thinking about them, are analogous to the data-driven approaches of AI, the machine learning kind of approaches, while the System 2 capabilities, which we use when a task is so difficult that we need to reason about the best way to solve it, are analogous to the reasoning-and-logic-based capabilities of AI.

The analogy is not 100 percent exact because, for example, machine learning for now cannot handle causality while our System 1 can. But the idea I have in this project is really to take inspiration from this theory of how we think and make decisions, combining our System 1 and System 2 capabilities, connecting them, and having them call each other when needed, in order to understand how to build the right combination of machine learning and reasoning capabilities in an AI system. Of course, the machine has different capabilities and limitations than the human brain, with different pros and cons, and building a machine is not the same as building our own brain and decision capabilities, but I am hopeful that the theory can be a good inspiration for how we can do that.
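A minimal sketch of the kind of fast/slow arbitration Rossi is describing: a cheap, learned System 1 component answers when it is confident enough, and otherwise the problem is escalated to a slower, explicit System 2 reasoner. The components, the confidence threshold, and the toy stand-ins below are placeholders, not the actual architecture of her project:

    # Fast/slow arbitration sketch: trust the learned model when it is confident,
    # otherwise fall back to deliberate reasoning. All pieces are placeholders.
    def solve(problem, fast_model, slow_reasoner, confidence_threshold=0.9):
        answer, confidence = fast_model(problem)   # cheap, intuitive guess
        if confidence >= confidence_threshold:
            return answer                          # System 1 is trusted
        return slow_reasoner(problem)              # escalate to System 2

    # Toy stand-ins: a memorized lookup as System 1, explicit evaluation as System 2.
    memorized = {"2+2": ("4", 0.99)}
    fast = lambda p: memorized.get(p, (None, 0.0))
    slow = lambda p: str(eval(p))                  # stands in for slow reasoning

    print(solve("2+2", fast, slow))    # answered by the fast path: 4
    print(solve("17*23", fast, slow))  # escalated to the slow path: 391

A real system would also need the two components to exchange information rather than merely hand problems back and forth, which is the question Rossi turns to below.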

ALEX WOODSON: What will this look like? What would this AI system look like that uses System 1 and System 2 learning? What are you hoping to achieve as something that we can see and possibly use?

FRANCESCA ROSSI: The initial goal is to understand really how that theory of decision-making in humans can help us define a very high-level theory, architecture, and framework for AI systems in general. Then, of course, we can try to apply that to some specific applications, for example, applications that have to do with natural language interpretation or image interpretation, applications that, in order to advance significantly, need to overcome the current limitations of machine learning and data-driven approaches while still needing that data-driven approach. We have understood over the years that the reasoning approach of AI alone has a lot of limitations as well because it cannot really gather information and knowledge from data.

On the other hand, a machine learning approach can do that, but it cannot derive information that can be, for example, used for causality. Causality is also something that is definitely needed, for example, if we want explainability because in some sense explainability is like being able to trace back to the cause of some effect. This can be done usually in a symbolic and reasoning and logic-based approach to AI, but it is not easy to do in a data-driven approach.

So the idea is to really understand, in the most simplistic way (which, however, I don't think is the final answer), what these two sets of capabilities, System 1 and System 2, are in a machine, what they should pass to each other in terms of information and knowledge, and how they should call each other. Of course, this is a very simplified vision, and there are also some associated approaches suggesting that probably the best way is for each capability to be a mix of reasoning and learning approaches in AI.

ALEX WOODSON: I know you're also working toward building systems that are trustworthy, that are unbiased, and that respect fairness and privacy. Do you see these projects as the same or connected? Is the goal to one day have an AI system recognize, Oh, I'm being biased, I need to change this, I need to tweak this algorithm? Is that the ultimate goal?

FRANCESCA ROSSI: Yes, that's the ultimate goal as well. Something that I'm also doing, and I think that these two lines of work are going to be connected at some point, is to understand how to define our own way of making moral judgments and how to inject that into a machine's decision-making capability.

For example, we have been looking at different ethical theories, like deontology, consequentialism, contractualism, and so on, because of course there is no human being, I think, that is only deontological or only consequentialist and so on, so we switch between these different theories. Sometimes we think about the consequences of an action, and sometimes instead we have very simple deontological rules and we follow those rules, but sometimes in some contexts we decide that those rules can be violated because of some consequentialist or contractualist approach.

We gather a lot of data from people to understand how we switch between these different theories in decision making in various contexts, and then we try to understand how to model this switching between the theories in a machine, so that the machine can perform moral judgment in a similar way to humans. But more than that, so that the machine can understand how we do the switching and make moral judgments, and so that the machine, whatever its goal is, can work with us, understanding better how we think and how we make moral decisions about any scenario.
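A toy sketch of what switching between theories could look like computationally: follow a deontological rule by default, but allow a consequentialist override when the stakes cross a threshold. The rules, harm estimates, and threshold are all invented for illustration; Rossi's actual models are learned from the data she describes rather than hand-coded:

    # Toy switch between ethical theories: a hard rule by default, with a
    # consequentialist override in extreme cases. All values are invented.
    def moral_judgment(action, forbidden_actions, harm_if_not_taken, override_threshold=100):
        if action in forbidden_actions:
            if harm_if_not_taken >= override_threshold:
                # Consequentialist override: breaking the rule averts far worse harm.
                return "permit (override)"
            # Deontological default: the rule simply says no.
            return "forbid (rule)"
        return "permit"

    rules = {"lie", "break_promise"}
    print(moral_judgment("lie", rules, harm_if_not_taken=5))    # forbid (rule)
    print(moral_judgment("lie", rules, harm_if_not_taken=500))  # permit (override)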

Of course, the overall goal, if you look at it a little bit more abstractly, is to build machines that help us make better decisions, help us recognize, as you say, when we are biased or not, and help us recognize the best decision to make in whatever we need to do. So it's not really the goal to build machines that then go off autonomously to do things by themselves, but machines that can work with us in the easiest and simplest way, that can interact with us. That's why there is so much emphasis on natural language-based approaches in AI: we want machines and humans to communicate in a way that is very natural for us. We don't want to adapt to the machine; we want the machine to adapt to us. So, question-answering systems and chatbots, but also dialogue systems, and that's really a big challenge because so far we have not understood very well how to build dialogue systems. To build dialogue we need a machine to have a lot of common sense reasoning, which we still don't know how to embed in a machine.

ALEX WOODSON: We have been speaking about ethics in regard to AI and these different philosophical theories and thoughts, but ethics are different in different societies, different countries. Different groups have different thoughts when it comes to philosophy. How do you think about that when you're building these systems, when you're working on making these systems act ethically? How do you think about incorporating different societies, different human experiences, into that?

FRANCESCA ROSSI: Of course, one can focus on a specific application and a specific context and scenario and then understand what the right values are for that. But it's also true that, even without AI, the legal system is different in different parts of the world. Of course, the legal system is informed by the values and the ethical considerations of a certain culture, so different cultures have different laws about how life should work. It's not surprising that for AI, too, you might have different values in different parts of the world.

That's not something that is new for AI. We still work in environments where we deploy applications in various parts of the world with different legal systems and different regulations.

Of course, ethics usually goes beyond what is required by regulation, and these considerations can be context-based and culture-based. That's why all this research and all these technological efforts around AI ethics need to be done with a very multi-stakeholder approach, and not just multi-stakeholder within a certain part of the world but also multicultural, to understand the differences as well as the commonalities.

As we mentioned before, Europe has very strict regulation about data policies. Other parts of the world do not have the same regulation, but companies can still decide on data policies for whatever they deploy; for example, IBM has its own data policy. Of course, the data policy depends also on the business values of the various companies.

For example, IBM is a company that provides AI to other companies to help them do better whatever they need to do, and our data policy is that the data we use from our clients to help them and to build AI models for them belongs to our clients, so we are not going to reuse it for another client, for another application, and so on. That, of course, is embedded in our values and also in our business model and the way we work with our clients. Again, there can be different approaches because of the culture but also because of the different ways that companies, and whoever deploys AI, operate.

ALEX WOODSON: I would like to ask a slightly more theoretical question. This is something that I have posed in different ways to other AI researchers on my podcasts, and it's something that you touched on, which is "common sense." I have gotten a few different answers on this: Do you think AI will ever have common sense? Is that maybe a goal of your work? You have also said that we're still trying to understand how humans make decisions. With that in mind, can we understand how AI would make decisions and make those decisions on its own?

FRANCESCA ROSSI: The ultimate goal, yes. I don't have specific answers to that right now. I'm just trying to understand—of course, there is a research area that works on common sense reasoning for AI. Again, from my point of view, I still need to understand how to combine our reasoning capabilities with our intuitive kind of capabilities because, to me, understanding how to combine these two things would lead possibly to understanding how we get to this huge amount of common sense information and knowledge that we use in everything we do and that allows us to learn efficiently a new thing, to interact with other people that we have never met, without having to specify everything about how the world works.

Maybe this can be done in a very primitive way by injecting a lot of data of how the world works into an AI system, but I think the ultimate goal should be to really do it in an efficient way just like we do. Yes, it's the ultimate goal, but I'm still trying to understand how to build an overall framework where then common sense can be injected.

ALEX WOODSON: As we mentioned at the beginning, you're the general chair of AAAI 2020 in New York, which is happening February 7–12. What is this conference exactly? What should we be looking for?

FRANCESCA ROSSI: AAAI is an annual conference. It is one of the largest general AI conferences, meaning it does not focus on a specific sub-discipline of AI, but it covers all the disciplines of AI. That is the strength of that conference because all the disciplines are represented there—the machine learning kind of approaches, but also the reasoning, the logic-based approaches, the symbolic approaches, the planning, the scheduling, and so on.

Usually for this conference, we receive papers that are written by researchers, and then we select some of them. We received about 10,000 paper submissions. We had about 6,000 people reading all these papers, because for each paper you need three reviews, three opinions, and so on, and we got to the end with less than 20 percent accepted. All the accepted papers are going to be presented by their authors.

Besides that, we have a lot of workshops, which are more informal ways to discuss specific topics. I think we have 23 workshops. We also have more than 20 tutorials. Tutorials are half-day educational presentations; people go to them to learn more about what is going on in a certain area of AI.

Then, we have a lot of invited speakers in our plenary sessions. Of course, all the paper presentations are in parallel sessions, because we have eight or ten parallel sessions at every single point, but then we have invited speakers. We have the three most recent Turing Award recipients, who won the award in 2019 for their pioneering work on deep learning, so we will hear from them. We have other invited talks about robotics, about the impact of AI on society, and many others. We also have a debate about short-term versus long-term approaches to AI, and about application-oriented versus theory-oriented ways to do research in AI.

Also, we have an interesting panel on the history of AI, which is focused on the use of games to advance AI. You may know that over the decades AI has also been advanced by setting challenges that researchers try to address, and many of these challenges were focused on games like chess, backgammon, Go, poker, or even soccer. We will have the leading AI researchers who have led and addressed these challenges, and they will tell us why they chose a specific challenge, why that game, whether and how it really advanced AI, and what the lessons learned from these games are. Garry Kasparov will be there as well to give the human side of playing games, particularly playing against machines. In his case, he played the famous 1996–1997 chess matches against IBM's Deep Blue.

Of course, we understand why we have been choosing games: games are much easier than real life, and they have very clear rules. But it's important to understand what we learn from solving a challenge about a game that can then be used in real life, which we all agree is much, much more complex than a game.

ALEX WOODSON: That sounds like a lot to think about there. Great.

Francesca Rossi, thank you so much. This has been really interesting.

FRANCESCA ROSSI: Thank you.
