Can You Code Empathy? with Pascale Fung

Mar 29, 2022

In this riveting and wide-ranging conversation, Senior Fellow Anja Kaspersen is joined by HKUST's Professor Pascale Fung to discuss the symbiotic relationship between science fiction and innovation and the importance of re-envisioning ethics in AI research. We may be able to code machines to seem and act more like humans, says Professor Fung; however, the ability to question our own existence and to understand who we are is a fundamentally human feature and cannot be easily or even responsibly encoded.

ANJA KASPERSEN: Today I am very pleased to be joined by Pascale Fung. Pascale is a professor in the Department of Electronic and Computer Engineering and Department of Computer Science and Engineering at The Hong Kong University of Science and Technology. She is known globally for her pioneering work on conversational artificial intelligence (AI) and computational linguistics, and she was one of the earliest proponents of statistical and machine-learning approaches for natural language processing (NLP). She is now leading groundbreaking research on how to build intelligent systems that can understand and empathize with humans.

A huge welcome to you, Pascale. I have really been looking forward to this conversation with you. Your professional accolades are many, most of which we will touch on during our conversation. However, for our listeners to get to know you a bit better, I would like us to go back to your upbringing during what I understand to be a very tenuous political period in China.

PASCALE FUNG: Yes, indeed.

I was born, spent my childhood, and grew up in Shanghai in China. I was interested in magic when I was a little child. One day I got a book from my father, I think. He showed me a book about the physics behind magic and it made me realize that there was an explanation for everything magical, and that was really fascinating. It was the first time I think I encountered physics, and I was very interested in all kinds of encyclopedias and so on.

Then, around the age of seven or eight I read a science fiction book about the future where robots would be helping us, there would be no longer flu, and there would be shopping from home. It was very eye-opening, and I was so inspired that I decided I could not wait for this future to come and that I wanted to build robots. That was the beginning of my interest in robots and later on artificial intelligence and the technology that is associated with this, which is computer science.

These three things we see today. It is pretty interesting that we do spend a lot of time online from home shopping and otherwise, and we do have robots now, but we still have flu and, more than that, in the last two years we have seen that humanity still has not quite conquered the so-called "flu."

ANJA KASPERSEN: I learned in earlier conversations that we had, Pascale, that you were quite competitive academically and in some ways always motivated to break new ground.

PASCALE FUNG: My parents are both artists and they have very open spirits. For Chinese parents that is quite something because they did not actually tell me what I should be doing, and they also gave me books to read, or if I had a question, they would say, "You can explore the answers," so they gave me a lot of science books and encyclopedias and those kinds of books.

When I was a child around eight or nine I overheard a conversation between my uncle, who was an engineer, and my mother, who was an artist. In this conversation he was telling her that I was a brilliant little girl and I was doing very well in school in every subject, but never would I ever be as good as the boys in mathematics. That caught my attention. I was like, "Why would that be so difficult?"

Then I took a more acute interest. I was interested in everything, but then I took a more acute interest in mathematics and joined a mathematics club and Math Olympiad competitions, so I discovered that I actually quite liked mathematics and I was quite good at it. That was the first time.

Along the way, as I grew up, obviously as a woman, when I wanted to go to engineering college, I remember my male cousins told my father not to waste money sending a daughter to college and supporting me to do engineering. My father was furious about these kinds of comments, and he was like, "We will support you fully, no matter what."

Later on, when I wanted to do a Ph.D., again very friendly advice from friends of my mother saying that "for a girl to do a Ph.D. she is going to age so early, so soon, she's never going to be able to find a husband."

So all these things, all these comments, made me want to pursue my dream even more. I just wanted to prove people wrong, that it was not impossible for me to do these things. Of course, as the child of two artists, when I told people I wanted to do robots and I wanted to do engineering, I was met with a lot of disbelief, let's say.

All that kind of naysaying just became fuel to my motivation to pursue and realize my dream, and here I am today. I feel very privileged to have had all these opportunities to do a Ph.D. with brilliant people in the United States. I went to Bell Labs, which was my dream as a teenager, and worked with some of the most illustrious researchers in computer science and electrical engineering at Bell Labs. Since then I have had the very good fortune to always continue to pursue what I wanted to pursue since I was a child.

And then, maybe less than a decade ago, the field of artificial intelligence actually exploded and things I could only dream of making happen actually happened. The performance of conversational AI systems is approaching human-level fluency. We do indeed have robots now. I am collaborating with robotics companies to build robots that work in shopping malls as receptionists, synthetic humans who can talk to you and become your companion, and, last but not least, robots and virtual assistants that help the elderly and people who need help with their mental health.

Recently, this last year, we actually built a system, which we call the Virtual Quarantine Companion, in light of the fact that so many people coming back to Hong Kong had to stay in quarantine for two weeks in a hotel room. People going to China had to do that, and I went through that myself two summers ago, so we built this virtual companion for people in quarantine to talk to them, to guide them through some daily exercise and meditation, to chat with them, gauge their mental well-being and their emotional state, and to help them basically to self-care.

We also collaborated with the World Health Organization on their initiative Epidemic Intelligence from Open Sources (EIOS) to provide a question-answering engine for experts to ask questions related to Covid-19 and vaccine research, and that was very important two years ago. We played a small part, as did many other researchers in AI, in helping with the research on vaccines. I feel very fortunate to have the opportunity to do this kind of work.

ANJA KASPERSEN: You are known for your work on linguistic models and machine learning. As a testament to your passion for languages, I learned that you actually speak seven languages fluently yourself, which is an impressive feat.

I would like to ask you to explain to our listeners, or rather clarify, what your work entails and maybe add some historical context. What is conversational AI? What are its applications? Why and how does linguistics play into our ability to recognize human emotions and sentiments, especially I would imagine more complex emotions, such as humor, sarcasm, irony, and even deception?

PASCALE FUNG: I would say the history of conversational AI started maybe in the 1960s with the first expert-system, rule-based chatbot, called ELIZA. For many computer science students, the first homework assignment in AI class is usually to write a rule-based ELIZA that basically talks and chats with people as they type. It is a very simple kind of chatbot.
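(For readers curious about what such a rule-based chatbot looks like in practice, here is a minimal, hypothetical sketch in the spirit of the ELIZA homework Professor Fung describes; the patterns and responses are invented for illustration and are not Weizenbaum's original rules.)

```python
import random
import re

# A tiny ELIZA-style chatbot: match the typed input against hand-written
# patterns and echo back a canned, templated response. Illustrative only.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
    (r"(.*)\?",      ["Why do you ask that?"]),
]
DEFAULTS = ["Please go on.", "I see. Tell me more."]

def respond(user_input: str) -> str:
    text = user_input.lower().strip()
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFAULTS)

print(respond("I feel tired today"))         # e.g. "Why do you feel tired today?"
print(respond("I am curious about robots"))  # "Why do you say you are curious about robots?"
```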

Later on I would say different milestones happened. I remember I got into the field of conversational AI by way of speech processing in 1988. At the time we were doing speech recognition of phonemes only, very elementary kinds of speech recognition.

In the late 1980s and early 1990s I participated in programs that were funded by the Defense Advanced Research Projects Agency (DARPA) and the Department of Defense (DoD) in the United States. I worked for a company called BBN Systems and Technologies and we built systems that allowed fighter pilots to use voice commands to operate weapons while they flew the plane. At the time we were not very acutely aware of any ethical implications because it just seemed very elementary, a very basic kind of application, and what we did did not feel very real or impactful.

DARPA continued to fund the project. The first-generation DARPA-funded conversational AI project was called Communicator. Different groups from BBN, from AT&T (Bell Labs at the time), and some other universities got grants to work on pushing the envelope of conversational AI. I think the first prototype was a system that would book train tickets for you when you called in, so it would ask you how many passengers, what the origin was, what the destination was, and so on as you spoke. That was in the early 1990s.

In the late 1990s we saw actually the first generation of commercial conversational AI systems. I don't know if the listeners remember, but we had IBM ViaVoice, which was the first commercial speech recognition and transcription software that you could use on computers. And then in the late 1990s, I think it was around 1998 or 1999, there were two companies, Nuance and SpeechWorks, that commercialized conversational AI for call centers, where, if your listeners remember, in the late 1990s if you called any helpline you would be answered by a computer asking how it could help you and so on. I remember there were even Saturday Night Live skits making fun of computer agents at the time because the computer voice was not very natural. That technology has done a lot of the jobs of call centers since the late 1990s.

Moving on, I participated in that generation of conversational engines. I actually worked at a startup that worked with China Telecom in the late 1990s/early 2000s to use conversational AI for call center services. It proved to be too early for China at the time, but then the technology kept improving. People doing research in conversational AI kept working on it.

Another milestone came around the time IBM's Watson participated in Jeopardy! and beat the human champions, and there was a lot of question answering in that system with a computer voice. That is a different generation of conversational system that actually had the ability to compete, to play a game, to strategize, and to answer questions. Jeopardy! formulated a question and Watson gave an answer. That was a huge milestone in the area of conversational AI, when IBM's Watson beat the human champions, and it took them I think only five years to reach their goal at IBM.

After that, another milestone was Apple Siri. I think it was the iPhone 4 when they first launched the voice assistant Siri in the smartphone, and that's when conversational AI became pervasive. Since then, every smartphone has a voice assistant; it has become commonplace.

More recently, after that came the generation of smart speakers that had conversational agents and conversational AI, helping you, answering your questions, doing shopping for you, and so on. This is conversational AI.

We continue to make these systems more empathetic. One of our main focuses over the last few years has been to make these conversational agents understand not only what you say but how you say it, because human-to-human communication is based not just on the content of what we say but also on the emotion. We convey a lot of meaning through emotion. So, if you change the tone of your voice, then the meaning of your sentence can change as well. For example, very simply, "How are you?" versus "How are you?" versus "How are you?" tells you whether I'm happy to see you, not happy to see you, or indifferent. That is all very important.

Today most commercial conversational AI systems have this component of empathy because the systems need to engage the user and need to help users better. Without understanding user emotion and sentiment this cannot be done to our satisfaction. All systems have empathy modules today and we continue to make the conversational AI systems better.
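To make the idea of an "empathy module" concrete, here is a minimal, hypothetical sketch: it combines a crude keyword sentiment score over the words with invented prosody features (pitch and energy) standing in for "how you say it," and maps the result to a response style. The word lists, feature names, and thresholds are assumptions for illustration only; production systems use trained acoustic and language models.

```python
# Hypothetical empathy module: combine what was said (the words) with how it
# was said (prosody features) to pick an empathetic response style.
POSITIVE = {"great", "happy", "good", "wonderful"}
NEGATIVE = {"sad", "tired", "awful", "lonely"}

def text_sentiment(utterance: str) -> int:
    words = set(utterance.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def classify_emotion(utterance: str, pitch_hz: float, energy: float) -> str:
    score = text_sentiment(utterance)
    if score < 0 or (energy < 0.3 and pitch_hz < 150):
        return "distressed"   # negative words, or a flat, low-energy delivery
    if score > 0 and energy > 0.7:
        return "excited"      # upbeat words said with high energy
    return "neutral"

RESPONSES = {
    "distressed": "I'm sorry to hear that. Do you want to talk about it?",
    "excited": "That sounds wonderful! Tell me more.",
    "neutral": "I see. How are you feeling about that?",
}

emotion = classify_emotion("I feel so tired today", pitch_hz=120.0, energy=0.2)
print(emotion, "->", RESPONSES[emotion])
```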

There are also two kinds of conversational AI systems. One uses a computer voice, and with the other you just type, like a chatbot. These are all considered conversational AI systems.

And then, within these systems some will help you complete a task, and those are called "task-oriented systems." Others just aim to chat with you, engage you, and make you happy as a companion, and those are what we call "open-domain chatbots." Moving forward, a lot of systems are hybrid and have both functions, helping you with some tasks and chatting with you to engage you.

That is the current state of conversational AI system applications, so there are many, many different applications in different areas.

ANJA KASPERSEN: A lot of your work obviously has been focused on building these intelligent systems to understand and empathize with humans that you just spoke to, a feature which is even difficult between humans, as it can be difficult for us to understand the different meanings we attach to what we say. The author and historian Yuval Harari noted some years ago in one of his books that with tens of thousands of millions of data points the algorithm knows us better than we know ourselves. This was a way for him to caution against training algorithms to try to understand us in this way.

Is there a difference in your view between knowing and understanding when talking about machine intelligence or coded empathy, and can understanding and empathy, which require at least a basic level of compassion, be coded in a meaningful way? I would imagine the ethical issues and related tensions are quite profound in taking this technology forward.

PASCALE FUNG: Yes, you are absolutely right. Actually, you pointed out a very fundamental difference between machines and humans, which is that we can code empathy modules, we can code emotional recognition and empathy response into computers, into its abilities.

But we cannot encode compassion, which has an element of intention. Humans have empathy when we are compassionate. We want to have empathy. With computers, just as when we build machines to vacuum or to play music for you and so on, we build the functionality, the ability to perform these functions, into the machines, but we cannot build in the kind of intentions that humans have.

The implication is actually quite profound. When you do not encode intention—because we don't know how—we are not encoding any intention into machines to be empathetic. We are just encoding the function of being empathetic, which means that we cannot generalize from a machine that seems to be responding empathetically to you and think that this machine can form an attachment to you. Machines do not attach to you. They are able to empathize with you and perform tasks for you, but it doesn't mean that they therefore will attach to you. When humans have empathy for each other, we do get attached to each other, and that is a fundamental difference between machines and humans.

Ethical implications can be many. You are very right in pointing this out, and we must also think about these implications. For example, if we provide a robot companion to an elderly patient, say, to keep the elderly patient entertained and to do mental health exercises daily, and because such companions can be empathetic—as I mentioned, they can be programmed to be empathetic—the elderly of course will feel more reassured and more engaged with the robot.

But there is also the danger of the elderly getting too attached to the machine. I can give you an example. Twenty years ago Sony invented Aibo, which was the first robotic pet. When Sony tried to discontinue these robotic pets they were met with a lot of complaints and pleas from elderly people in Japan who felt very attached to their robotic dogs and therefore begged Sony not to discontinue the product maintenance and support. Sony then made a new generation of robotic dogs for these elderly people.

So there is an upside and a downside of having empathetic machines. You are absolutely right. Therefore, I think today as AI researchers and engineers we can no longer afford to be oblivious to the ethical and societal impacts of what we do.

I think back to the days when I was building the speech-recognition system for fighter pilots. We were not very concerned about its applications or ethical implications at the time, but today we are very aware of it. In fact, in every conference in AI today there is an ethics committee, there is ethical review of your papers, and researchers are being asked to make ethical statements of the implications of their work. We might not know accurately what the implication is going to be, but we need to think of it, and we need to be mindful of what we do.

ANJA KASPERSEN: It is very interesting what you just said. There certainly has been a lot of focus on deep learning, and I have heard you state on a few occasions that deep learning is merely a new term for something that has been around for a long time.

There is a need, I think, to focus on the various approaches in machine learning, which are too often used interchangeably, yet it is important to understand the differences between them, because which approach you choose matters greatly for what solution or task you are actually deploying it toward, what outcome you expect, and also for assessing its impact.

In trying to wrap our minds around this, it strikes me that it would also be important to address what we mean by the concept of understanding. The late Thích Nhất Hạnh, the Zen Buddhist monk who passed away earlier this year, had a very interesting notion of this concept of understanding. He spoke about understanding as a means to throw away our knowledge, to throw away our preconceived ideas about the other. Often when we don't understand the other, when we are not able to empathize, to show compassion—even if you make a distinction between empathy and compassion—it originates from not knowing ourselves well enough. Through understanding, according to Thích Nhất Hạnh, it is only then that you can actually demonstrate care and love for the other person, to practice what he calls "compassionate listening," which I think is also very important when we think about how to translate that into computational models.

When it comes to this issue of understanding, it thus stands to reason that we cannot just base ourselves on data, because if understanding means that we have to be willing to throw away our knowledge, we need also to be willing to question whether the data used in training these large computational models actually gives us the full and whole picture of what it means to be human.

PASCALE FUNG: First of all, let me answer from a personal point of view. I agree with this notion and this definition of understanding as a human being. I agree with it strongly and deeply.

I think I have gained new understanding of myself during the last couple of years of COVID-19, relative isolation, and being stuck in one location all the time. I have had a lot of time to reach inside myself, to understand myself better, and to understand my fellow human beings too.

I would say it's like an iterative process or it's more like a symbiotic process, understanding myself through understanding others and understanding others through understanding one's self. Actually this is something that I feel strongly about and I think that is deeply, deeply human.

I do want to make a distinction here about this type of retrospective, deep inner understanding that humans possess. In other words, even if I don't talk to you and I am not talking to anybody—I am just by myself, isolated, and meditating—I am still understanding myself; but when a machine is not doing anything there is no such thing as understanding of any kind. So there is a big difference between what humans consider understanding and what we call understanding in machines.

The field of artificial intelligence, starting with its name, "artificial intelligence," unfortunately uses a lot of terminology that is taken from the humanities and philosophy, so it can be ambiguous. When we say "intelligence in machines" we do not mean the same intelligence as humans. When we say "learning in machines" we do not mean the same kind of learning process as in humans.

We strive for that human-level performance. We are not sure we will ever reach there because there might be a fundamental difference between the kind of understanding that humans possess and the kind of understanding that we endow machines with. I make that distinction up front.

After that, when we build systems to learn, we are basically giving lots of data to the systems and telling the system to mimic the output given the input. A lot of people call this a "black-box" process. It's a connected network called a "neural network." "Deep learning" is actually another name for neural network architecture. It is a connected network of different mathematical formulas and mathematical models.

They learn from lots of examples of input and output. For example, machine translation learns from a lot of examples of translated sentence pairs, source sentences and target sentences. It just learns. The way deep learning learns is not through—you are right—explicit rule-based learning or symbolic learning. It is not learning the grammar of English or the grammar of French before it does the translation; it is just looking at a lot of examples of mappings between English and French and learns them, I would say, almost the way a child learns a language.
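As a toy illustration of learning a mapping purely from input-output examples rather than from explicit rules, here is a minimal sketch; the tiny dataset and two-layer network are assumptions for illustration and are far simpler than any translation system discussed here. The network is shown example pairs and adjusts its weights until its outputs match them.

```python
import numpy as np

# A small neural network learns a mapping purely from input-output example
# pairs, with no explicit rules given to it (here, the XOR function).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
Y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1 = rng.normal(size=(2, 8))   # randomly initialized weights, layer 1
W2 = rng.normal(size=(8, 1))   # randomly initialized weights, layer 2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: the network's current guess for each example input.
    H = np.tanh(X @ W1)
    P = sigmoid(H @ W2)
    # Backward pass: nudge the weights to reduce the error on the examples.
    dZ2 = (P - Y) * P * (1 - P)        # squared-error gradient through sigmoid
    dW2 = H.T @ dZ2
    dZ1 = (dZ2 @ W2.T) * (1 - H ** 2)  # gradient through tanh
    dW1 = X.T @ dZ1
    W2 -= lr * dW2
    W1 -= lr * dW1

# After training, predictions should be close to [0, 1, 1, 0].
print(np.round(sigmoid(np.tanh(X @ W1) @ W2).ravel(), 2))
```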

I have learned different languages in my lifetime, and some of them I learned without explicitly learning the rules, as a child does. My native languages are three Chinese languages, and some others I learned by going to school for maybe one or two months and then just practicing with people.

Even humans have two ways of thinking, analyzing things, and learning: one is more reflexive and intuitive, the other is more deliberate. There is a theory that our cognitive system has two systems, system 1 and system 2; system 1 is more intuitive and reflexive, while system 2 is more deliberate, where we need to learn to do certain things, even things we eventually learn to do very fast.

ANJA KASPERSEN: You are referring to Kahneman's research?

PASCALE FUNG: Right, right. That is in cognitive science, yes.

The new line of research in natural language processing in conversational AI is indeed about system 1/system 2 thinking. Of course, we are just doing this with machines, so we simplify greatly what we understand about humans into what we consider system 1 thinking and system 2 thinking. There is a hope that we will combine statistical machine learning, deep learning, with symbolic learning in a way that is efficient and that can be the next way forward.

But more important than that, we humans are able to learn in multimodal environments. When I learn a language, I do not just learn it from books or reading, doing language exercises. I learn it by living it. I learn the languages by going to that country or living in that country. For me personally that is almost the only way I can really learn. I cannot really learn language from textbooks.

Why is that? "When you learn a language you live it." We hear this all the time. You have all these contexts which help you learn, so it's not just word by word, but also the whole physical environment, the gestures people make when they are saying certain things, and the emotions they convey in the things they say.

Before I learn a language completely I remember going through stages where I half-understand some language and I can deduce the rest from the context and from the emotions people are using. Machines do that too. Our algorithms do the same thing: they learn certain contexts and they learn other things from these contexts iteratively.

I continue to learn new languages because learning language is just fun for me, and I love the process of learning a new language, and I love the outcome of course even more when I have learned a new language. It gives me great joy to be able to converse with a taxi driver in Mumbai just to tell him where I want to go and chat a little bit using the limited Hindi I learned. I feel this is a great gift of human intelligence. We are the only species that has language, and that is the most direct way we use to represent our knowledge, our intelligence, our emotions, and the rest.

ANJA KASPERSEN: Those are very interesting perspectives.

Now, Pascale, you have stated in a few articles and a few speeches that I have heard you give that AI has a gender problem. Can you elaborate on this for our listeners?

PASCALE FUNG: Yes, yes, indeed.

I was a science fiction fan as a little girl growing up. I remain a science fiction fan. I was surprised when I encountered people in computer science who told me they had no interest in science fiction. That just struck me because that was my world. Of course you want to build robots because you love science fiction. But there are people who don't like science fiction, and their motivation of doing AI research is totally different from my initial motivation, they really just want to help humans and so on.

I realized that in a lot of the science fiction literature the protagonist is usually male; it is often told from the point of view of a young male protagonist who is eager and struggling against the machines, fighting against the machines, or who might be a machine himself. And then you have all these fantastic classic science fiction books, such as those written by Philip K. Dick, and the classic science fiction movie Blade Runner that is based on one of his novels.

I was deeply immersed in this kind of culture after my family moved to Hong Kong from China. I mentioned earlier that I was in China when I started reading science fiction, but when we moved to Hong Kong I had never seen this kind of science fiction before, and I became immediately attracted to it. I became a "Trekkie;" I loved Star Trek and Star Wars, and I became very immersed in this world. In this world there are women, but we are a minority. Most of the people who love science fiction and are science fiction fans are male.

Then, I remember being in secondary school—I went to an all-girls secondary school—and I was about the only girl who was interested in electronics and computer science. It was just my interest. I was not thinking of pursuing science as a way of making more money or so on; that was not my consideration. I was just into building cool stuff.

I was in this girls' school and I was really the only girl—I started an electronics club in the girls' school, but I very quickly became the president, the secretary, and the only member of the club—but that was the beginning of my real learning of computer science.

Then I went to college, so I went from this all-girls high school to electrical engineering at a college in the United States. I remember the first class I went into was a lecture hall of 200 students, and it was almost all male students. So I went from an all-girls high school to an almost all-male major, and that was quite a startling change, but I guess I was oblivious. I remember I chose electrical engineering as a major when I was an undergraduate because that was the coolest major. That was the hardest major for people to get into. Naturally people told me, "Girls don't do electrical engineering," and naturally of course I wanted to do it.

But in the 1980s computer science spun off from electrical engineering as a separate discipline, so computer science was a new discipline in the mid-1980s. Because computer science focuses on software and electrical engineering focuses on hardware, computer science was seen as "soft," and there were more women in computer science than in electrical engineering. I was like: "I want to do hardware. I don't want to do software."

But that was also the time when there was the highest representation of women in computer science, around 30 percent, and this percentage of women has been dropping ever since.

What happened in the mid-1980s, if some of your listeners remember, was the beginning of Apple Computer Company and Microsoft. I think Apple's initial public offering (IPO) happened in the mid-1980s and was the biggest IPO ever. The culture of Silicon Valley venture capitalism started then: the whiz kid who started his (usually it's a he) tech startup with his buddy (usually another he) in a garage. That kind of story started then with Bill Gates and Steve Jobs with their partners. So that was the beginning of the Silicon Valley culture of male founders of computer science-based startups, tech companies. Today we see a lot of these companies becoming the most valuable companies in the world.

Following the 1980s we had this Internet boom in the late 1990s and early 2000s. Again we saw these male founders of tech companies, Internet companies then, who became very, very successful. You've got your Jack Ma, Elon Musk, and Jeff Bezos from this generation of founders.

So the story of computer science then became more male-centric because these founders are all male and the VCs who funded them are also male. It became a cycle because investors would choose to bet on people who they think can replicate the success of the other founders. From their point of view they want to take the least risk for the most success, so it is understandable that they look for the next Steve Jobs and the next Bill Gates, and that cycle of male-centric computer science started then and it continues today.

In AI the same story happened as we had more success with AI and the commercialization of AI products, of language products. Before, in the field of computational linguistics there was also a higher representation of women, but once something becomes lucrative, money-making or even more than that, then the boys come in; more men come in. It is not that there are fewer women, it's just that there are more men—and a lot of the machines we make are female. In fact, that is not far from the truth.

A lot of the machines we make, a lot of the androids or human-form robots people make, such as the famous Sophia from Hanson Robotics and the famous Erica from Professor Ishiguro in Japan, are both female.

When we make our machines, our conversational agents—I remember the first one we named Zara, and then Nora—they all take a female form as well. That is another side of the gender issue in AI. We tend to pick the female form when we build assistants or companion robots.

The other side of robotics is people who are building the Boston Dynamics kind of robots that can carry loads, jump over obstacles, go to dangerous places, and detonate a bomb. Those robots tend not to have any facial features or facial expressions, and they tend to take a very muscular physical form, which is not actually needed for their performance or function. It is just these kinds of gendered robotic roles. You have these female social robots that have empathy and talk to you and so on, and then you have these very muscular-looking robots with no head and no face that perform physical chores.

I think it is heavily influenced by science fiction. I myself can say that I am heavily influenced by science fiction. Then science fiction in turn is influenced by what we build, so it's a symbiotic process, science and science fiction. I used to tell people science fiction and science have nothing to do with each other. Today I must admit it's not true. These two fields do influence each other a great deal.

In my own field of computational linguistics, over the last 30 years I have just seen, when I go to conferences every year, more and more men. There are not fewer women—there are also more women than before—it's just that there are more men than ever, so the percentage skews toward men.

With the recent proliferation of AI applications, however, I do see a promising trend of more women Ph.D. students coming into AI, because now these young women see—I hope, and they tell me they do see—that there is a direct impact of the technology we build on society.

Before, AI or computer science was seen as something like a game you play. I remember other students telling us computer science Ph.D. students that we were just playing games. Our research seemed like games. When I talked about building systems that could talk to you, they would say, "You're just playing, right?" But today these systems actually have an impact on society, on healthcare, on the financial industry—actually on every industry—and I think because of that more young women have become interested in AI and computer science. So I do see a recent trend of more young women coming into the field, and I hope that trend continues. There are also more men coming into this field, but I do see the promising trend that more women are coming into our field now.

ANJA KASPERSEN: We need to make sure that it becomes a safe space for them to thrive and to build careers in.

PASCALE FUNG: Yes, we need to.

The major professional societies and major professional conferences in the AI area today tend to have diversity and inclusion components to make sure of that. It is not just about women, but also about underrepresented groups of people in research. Today we have the AI superpowers—the United States and China, and the European Union as a bloc is also a major player in AI research and development—but what about other countries in the world? We also have underrepresented groups and countries, so there is AI inequality, and we hope that does not translate into more inequality in society. That is also a very important issue we need to work on and to improve, so for women and other underrepresented groups we need to give them a safe and encouraging space, and we also need to give them resources.

When I was in the girls' school we actually didn't have computer science for the high school girls, but there was computer science for high school boys at the time. Today in my daughter's school both boys and girls can study computer science, so that is great. We should give equal opportunity to everyone. That is a good change and a necessary change for us to see more representation in computer science and AI.

ANJA KASPERSEN: I know this is something that you have been thinking deeply about, and you have also been cautioning against these types of technology-deterministic narratives that sometimes disempower people from engaging with the ethical challenges some of these technologies and scientific breakthroughs represent.

Do we understand what makes us humans sentient? Is it just the combination of our sensory perception and the thinking process, or is there more to it?

PASCALE FUNG: I mentioned earlier that there is a fundamental difference between humans and machines, no matter how machines seem to behave more and more humanlike.

The ability to question our own existence and to want to understand more who we are and where we came from is fundamentally human. I don't see any future where machines will start doing that. I don't even know how to encode that into machines. Where do we have that, why do we have that, and how do we have that? We don't know how to encode that into machines, so that is fundamentally different.

Again, machines can have the ability, the functionality, of performing certain tasks. Even empathy in machines is a task it is performing; it is a functionality it needs in order to complete a task. It does not come from its "good heart" because it doesn't have a heart. I know that people call machines with empathy "machines with heart," but that's just a metaphor; it's not the reality. There is a fundamental difference between the intention that humans have, our introspection, our questioning, our compassion, and so on. All of that machines don't have. We do not have the ability to encode that into machines yet.

ANJA KASPERSEN: Pascale, in a recent article you asked a very important question which has been on everyone's mind, and you alluded to this earlier as well. The name of the article was "Can China and Europe find common ground on AI ethics?" I guess we could also widen that to include other continents and other countries investing in AI research.

Can there be common ground, and, if so, what should and could that look like in your view? I know you have been thinking a lot about the cultural differences between different regions as they develop AI and the values that are being embedded.

PASCALE FUNG: Right, yes indeed.

Different societies have had different views about robots and AI. Studies have shown that in Asia people tend to be more receptive to AI and robotics, that we tend to view robotics and AI as being helpful. My own personal experience is directly linked to that. When I was reading science fiction as a child, the robots I read about were always friendly robots; they were fun, they helped you do different chores, and so on.

Whereas in the West there is also the separate tradition of cyberpunk, a dystopian science fiction genre, which I am also very attracted by. The one I mentioned, Blade Runner, presented a very bleak future of repressive machines and rebellious humans. That dystopian vision of not just robots but machines and high-tech in general and government in general is a view that has been quite prevalent in Western literature and media. So it's natural that people in the West, if they grew up with that kind of representation of our future, would be influenced by it. They would be more wary of AI's potential to do harm to us.

Whereas in the East—in China, Japan, and South Korea—we grew up with machines helping us and machines as a means of elevating the entire society to become better developed and go from a developing country to a developed country. In Asia machines and automation have always been seen as a tool of importance and a tool of social good.

Today a lot of Asians have lived and have been educated in the West, somebody like myself, and then others in the West have been adopting a lot of Eastern culture, Eastern philosophical thinking, Buddhism, and so on. A lot of people are familiar with different ways of looking at the world, so I do see there is mutual influence and things are not so binary.

If you look at the EU and Chinese guidelines on ethical principles for AI development, there are a lot of commonalities. China studies what comes out of the European Union very closely and does follow it a lot; I would say it is influenced and inspired by the EU principles a great deal. You know why? Because Chinese AI engineers, and indeed the entire Chinese central government leadership, were trained as engineers.

And what is engineering? Engineering is a Western invention. It is based on science, and science came from the Scientific Revolution, from the Enlightenment era. So the paradigm of scientific thinking, the scientific approach, and engineering is Western in a way, and people from China, Japan, or the rest of the world who are trained in this discipline have surprisingly similar backgrounds to those who are trained in the United States and anywhere else.

There is actually a very common language between scientists around the world, people who make AI. We might not have the same philosophical language from our own cultures, but we certainly share a common scientific language which shapes the way we do science and shapes the way we carry out research and how we make AI products.

The world of AI and AI-making is far more unified than people think. Researchers share openly among themselves. There is a free flow of ideas through publications. An idea that comes out in one place in the world, if it is a good idea, gets picked up very quickly by people in other countries. The research and development of AI is a very unified global effort.

It is very much like the effort to find vaccines for COVID-19, where scientists around the world worked together, built on each other's discoveries, and communicated with each other. They used the genome sequence of the virus that was published by Chinese scientists, and that was used by scientists in other parts of the world to come up with the mRNA vaccines.

The science of AI and the making of AI technology are already very global and very cooperative.

ANJA KASPERSEN: Does it surprise you, given what you just said, that there is such a language of geostrategic competition and also national competition in how we speak about AI?

PASCALE FUNG: Yes, but this phenomenon is only on the political, geopolitical, and economic side, not at all in the scientific community. Before AI became an important technology of economic growth nobody talked about AI in a competitive manner either.

But it has become such an important technology—very much like nuclear technology in the 1960s and 1970s. The technology itself has no politics, no political meaning, but the application of the technology in governance, in economic growth, and in the defense of a country is invariably linked to politics. Therefore, I am not surprised to see the language of competitiveness, the language of competition really, and even the language of an "arms race" in AI being used in the public sphere today. But I can assure you it is far from the reality of how we do AI research and development. It is not at all like that.

I do feel sad and a little disappointed that the technologies we work on with the hope of benefiting the whole of human society, the entire world, could be used as a political tool and as a tool to drive countries further from each other. I hope not to see that. I hope that AI technologists like myself and my colleagues can, through what we do but also through our advocacy, bring a message of peace building with AI and not the other way around. That speaks to your specialty.

ANJA KASPERSEN: Yes.

I have heard you say, Pascale, that as the technology matures, which it has definitely done in recent years, we now need to worry much more about how it affects society in both beneficial and adverse ways.

You yourself have been a tireless champion to ensure that ethical considerations in both the development and deployment of AI are taken seriously, and in this context you have called for global governance frameworks.

I think for anyone who has been working in the space of creating global frameworks or bringing people onto the "same sheet of music," to put it that way, to have shared regulations, there also needs to be a shared realization about their societal implications. You just referred to that as well in what you said about the different landscapes and that the language of division, the language of competition, was not one that resonates with the scientific community in the same way as it manifests itself in the political discourse.

Are we making headway on this? Do you think we can come to a stage where we have a shared understanding, a shared realization, or are we, with the plethora of principles and good efforts that seem to have very little focus on implementation, at risk of failing at the ethics of AI in your view?

PASCALE FUNG: I would say that since a few years ago, around 2017, there has been, as you said, a plethora of documents published on ethical principles in AI. I think we have now entered the stage where we are seeing operationalization of these ethical principles in companies. Some of these principles have become concrete regulations and laws. There is the General Data Protection Regulation (GDPR) in the European Union, and recently the Chinese government issued a directive on Internet companies with very clear guidance on how they implement recommendation engines in their systems, requiring them to give users a choice of turning off recommendations, which is quite an extreme measure. So now we do see more operationalization and more realization of these ethical principles, and they are becoming law. As we understand them better, as we try different ways of implementing these principles, we should be seeing more of an implementation stage rather than just discussion.

But the technology evolution doesn't stop. A lot of things we discuss, a lot of the regulations and laws today—for example, the one I just mentioned about recommendation engines—are very specifically tailored to what exists today. However, as AI technologies move forward we will need to continue to have this kind of a public discussion, debate, and discourse, and continue to iterate the ethical guidelines just like for bioethics. With new technology there will be new bioethics guidelines. The same is true for AI. As the technologies improve and evolve we need to have new discussions, new topics, and new guidelines. It is going to be an ongoing, iterative process. The technology does not stop evolving.

ANJA KASPERSEN: Which is a nice segue. You mentioned bioethics. Some would argue that we have only scratched the surface of the transformative impact and potential of AI, especially as it is being utilized much more for medical research, obviously not without its ethical challenges with regards to privacy, safety, and security.

Pascale, you have a very personal story that guided your interest in applying your skills to medical research. I was wondering if you could share more of that with our listeners and also your views on AI for medical research more generally.

PASCALE FUNG: Sure. I mentioned that my interest in AI came from my love of science fiction and robots, kind of like a child's interest in building something fun and cool, no more than that. Then my interest in mathematics and, later on, in statistical modeling led me to do more research into the particular methodologies we could use in machine learning. That was also out of personal interest, a love of mathematics and a love of making something that has scientific support, using scientific methods.

That was my approach to what I did and my research for decades until in 2015 I was diagnosed with cancer. Very quickly after I was diagnosed I had to be hospitalized, and I went through three surgeries, and then I recovered.

Throughout that process, from diagnosis to post-surgery, I had to go through different kinds of treatment plans. My oncologist, who is also himself a researcher, shared research papers with me knowing that I am also a researcher. I took an interest in my own cancer as if it was a scientific project. It became a research project, so I read tons of research papers on my specific type of cancer, different treatment plans, and whether we should use this treatment or that treatment based on the published scientific findings. I am very well today. I realized that other people's research saved my life and they continue to save the lives of millions of people around the world.

Then I had an existential crisis about my own work. Other people's research saved lives. I was just building something I thought was cool, and it became abundantly clear to me that that was not enough, that I needed to have a new purpose for my new life that was saved by other people's research. I started questioning everything I was doing, whether I was doing it for a purpose, and how I should live my life.

Long story short, my oncologist and I talked about what I did, and he said: "You know what? Machine learning can help us with cancer diagnosis and cancer treatment."

I jumped on that. I'm like: "Of course this is what I should do. This is my purpose. This is what I can do to help other human beings and help a little bit in saving other people's lives."

How to use machine learning to improve health care became my passion from that point on. Before then I was never interested in medical school or medical studies. Chinese parents always want their children to become medical doctors, but I just wanted to be an engineer who built robots. But from that point on, I realized that I really felt passionate about using AI to benefit humans and society.

A lot of my colleagues share the same passion, and that has given us a newfound purpose in what we do. I think today a lot of us feel energized by this purpose and this mission. In both physical health and mental health there are a lot of start-ups and researchers using AI and machine learning to improve healthcare.

My one big passion remains early diagnosis of cancer and cancer treatment planning using machine learning. This has been met with a huge challenge, which is data. Listeners probably all know by now that for machine learning to work well it needs a lot of data to learn from, but patient data is very fragmented and very well guarded, so we don't have a homogeneous database of hundreds of thousands, let alone millions, of cancer patients of one single cancer type to train our AI systems. That remains a very big challenge, but today there is research into new kinds of machine learning that preserve patient privacy—for example, something called federated learning—that attempt to solve this challenge, so I am hopeful that we will get there with cancer treatment, with precision medicine using machine learning.
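To illustrate the federated learning idea mentioned above, here is a minimal, hypothetical sketch: several simulated "hospitals" each run a short local training step on their own synthetic data, and only the resulting model weights, never the raw records, are sent back and averaged. The datasets, model, and hyperparameters are invented for illustration; real federated systems add secure aggregation, differential privacy, and much more.

```python
import numpy as np

# Minimal sketch of federated averaging (FedAvg): each site trains locally on
# its private data; only model weights are shared and averaged centrally.
rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One site's local training: a few epochs of logistic-regression gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)   # gradient step on the log loss
    return w

# Three hypothetical hospitals, each with its own synthetic private dataset.
true_w = np.array([1.5, -2.0, 0.5])
hospitals = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = (1.0 / (1.0 + np.exp(-X @ true_w)) > rng.random(200)).astype(float)
    hospitals.append((X, y))

global_w = np.zeros(3)
for round_ in range(50):
    # Each hospital refines the current global model on its local data...
    local_ws = [local_update(global_w, X, y) for X, y in hospitals]
    # ...and the server only ever sees and averages the returned weights.
    global_w = np.mean(local_ws, axis=0)

print("learned weights:", np.round(global_w, 2))  # roughly recovers true_w
```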

Before we get there, I have started working with life scientists at my university. Our group and another group of life scientists have been collaborating on using cancer patient data for hospital planning and to predict readmission likelihood and so on. We are not quite to the point where we can say, "Okay, given your history you are likely to get such a cancer, and therefore you need to do this." We are not there yet, but I am hopeful. I hope that's what we can do.

Cancer kills so many people unnecessarily. I have gone through it. I got the best care that is possible. I went through this research stage with a world-leading research scientist and oncologist together to find the best way to cure my cancer without incurring huge physical costs to my body, and I understood that nobody needs to die from cancer. If we can diagnose it early and we can treat it with the appropriate treatment, nobody needs to die from cancer. That was a big revelation to me, and I will spend the rest of my life pursuing that goal, working with medical professionals, hoping to help in that direction.

ANJA KASPERSEN: That's a very reassuring message to those who are listening who are either impacted by or know someone who is impacted by cancer. Thank you so much for sharing that story, Pascale.

I am so happy to hear that you are doing well now and have fully recovered, and also that you are taking that experience and, as you said, combining it with all of your expertise in the AI field and trying now to put it to good use to help others.

Just to shift back a little bit to your core domain, which is around linguistics and language models, there has been a lot of focus lately on the toxicity of some of these language models that are being developed and put into use.

Are you worried about what you are seeing right now, where we are attributing all of these capabilities to these rather rudimentary language models, obviously for purposes of advertising and generating hype around them to attract venture capitalists, etc.? Is it just public relations hype? Where are we heading with these language models in your view?

PASCALE FUNG: I always say regarding these large language models that the hype is both false and true in the sense that people both underestimate and overestimate these language models at the same time. How is that possible? That is again coming back to function versus intention.

Today's large language models are indeed very powerful, beyond what we could imagine just a few years ago. They are indeed very powerful, to the point that we can use them to build natural language systems that can answer all kinds of questions, write novels, write scripts, write poetry, make summaries, and even write code, among other things. It is indeed very, very powerful. We have not seen the limit of their capabilities yet. That is one message: they are indeed very, very powerful, and probably more powerful than what we see today, as these language models become bigger.

Because of that, there is a great ethical impact of these language models: they behave in such a humanlike way that they can do things. They can indeed generate text that is harmful, and they can be abused and misused. Those dangers and risks are real, and we need to actively and immediately build algorithms to control these language models, to filter their output, to guard against, and to mitigate potential harms.

This is what we actively are working on today, and I am appealing to the whole field, to the entire NLP community, to work on these algorithms, to control these large-scale language models. Yes, on one hand it is true these risks are very real, and we need to mitigate these risks.

On the other hand, as I said, we can control them. We are building algorithms to control algorithms because they are human-built, they are built by humans. These language models did not drop to us from some alien being. We built them. We can control them. That is another message. We built them and we better control them. That's what we are actively working on.
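As a sketch of what "algorithms to control algorithms" can look like at the simplest level, here is a hypothetical guard layer wrapped around a stand-in generate() function. Real moderation pipelines use trained safety classifiers rather than a keyword blocklist, but the control structure, checking the model's output before it reaches the user and falling back to a refusal, is the same. All names and patterns here are invented for illustration.

```python
import re

# Hypothetical stand-in for a large language model's generation call.
def generate(prompt: str) -> str:
    return "This is a placeholder response to: " + prompt

# A deliberately simple guard; production systems use trained toxicity and
# safety classifiers, but output is likewise checked before release.
BLOCKLIST = [r"\bviolence\b", r"\bweapon\b"]  # illustrative patterns only

def is_safe(text: str) -> bool:
    return not any(re.search(p, text, flags=re.IGNORECASE) for p in BLOCKLIST)

def guarded_generate(prompt: str, max_retries: int = 3) -> str:
    """Wrap the model: filter unsafe outputs and fall back to a refusal."""
    for _ in range(max_retries):
        candidate = generate(prompt)
        if is_safe(candidate):
            return candidate
    return "I'm sorry, I can't help with that."

print(guarded_generate("Tell me about empathetic robots."))
```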

ANJA KASPERSEN: It's interesting what you say because increasingly we see that there is a tendency towards almost removing the human from the story, as opposed to reiterating what you just said, the story of AI is fundamentally a human one. It's choices we make, it's decisions we make, it is our choices of applying it in a certain way, and yet we are sort of buying into this more technologically deterministic narrative that this is beyond our control, which is extremely damaging for any ethical oversight.

PASCALE FUNG: Right, right. This trend, this narrative of giving machines agency, is a narrative in which machines have somehow become this way on their own and are going to have the agency to do this harm or that harm.

Just because they appear to have human-like performance does not mean they have agency. We still have human agency. We are the ones who built the technologies and we are the ones who will build the controls over them. I am a fan of science fiction, and in science fiction machines have agency—because that is the fun part of science fiction, that these machines come out and they have agency—but in reality, in science, machines don't have agency.

Therefore, we need to modify the public discourse and the popular narrative from a science fictional one of machine agency to the realistic one of human agency. We build these machines, we control these machines, we can decide what to build and what not to build, and we can decide to build safeguards and mitigations and locks, metaphorically speaking. We can decide what to build and we can decide how to control what we build.

It's just like cars: we have cars everywhere. Nobody thinks of these cars as having agency because humans are driving them. Even though some cars have become almost automatic, we still don't think of cars as having agency. But imagine somebody from the 16th century seeing cars running around; they would be scared and think that these cars are some kind of magical beings.

If we in the scientific community can continue to demystify what we build, the algorithms and so on, for the public, I think we can change the narrative from that of machine agency to human agency again.

ANJA KASPERSEN: Do you feel that we are sufficiently ready and able to have discussions around the limitations that these systems present, so not just the risks but also the limitations of functionality as we are continuing to deploy rather immature systems into our daily lives, which of course comes at a price?

PASCALE FUNG: I think you are spot-on about that need. Today whenever we get medicine from a drugstore, it comes with a disclaimer; it always tells you the side effects. We need to do the same with AI and with robots. We need to explain the limitations and potential harms of what we build and what we are using, just as with drugs and their potential side effects, so that users are aware of it and mindful of it and can decide whether to use it or not. This is something we need to do in AI and robotics: explain the benefits but also the limitations of what we are putting in people's hands, homes, and workplaces.

I would like to see smart speakers come with an explanation of how to turn them off and of when the speaker is listening to you and when it is not. We need more explanations of that kind for the things that we put in people's hands, homes, lives, and work.

ANJA KASPERSEN: As someone who actually builds these systems, can you tell us: Is it possible to safely interrupt these systems once they have been deployed and embedded?

PASCALE FUNG: The responsible thing to do is to give the user the agency to disrupt it. A recent development is that the Chinese government has mandated that Internet and AI companies allow users to turn off recommendations. It is possible; we just didn't think of building that before, and now you must. So, yes, it's possible.

We need more guidelines of this kind, these so-called "ethically aligned design principles": when you design these products, what are the things you need to follow? It is the path toward more operationalization of responsible AI, and it is coming. We are slowly but surely taking the steps to ensure safety for the users of AI products and systems.

ANJA KASPERSEN: You mentioned earlier in our conversation that when AI is embedded into military applications it might be part of a much bigger systemic response. The question is: Is it possible to interrupt it in the same way?

PASCALE FUNG: First of all, I think it is always possible. Second, it is actually less of a technology issue and more of a governance issue.

I am happy that you brought up the military application of AI. I think it is a very good example, a case study, to illustrate this point of governance versus technology development. I was actually on a panel at the United Nations, facing ambassadors from nearly 200 countries, in the discussion on lethal autonomous weapons.

It turns out that military applications of AI, like military applications of any technology, fall under the purview of the Geneva Conventions. It is a case where, many decades ago, people designed the governance for any new weaponry that could potentially come into being. The Geneva Conventions contain various clauses governing how weapons can and should be used, when they should be used, when they should not be used, and so on. That applies to any weapon, including lethal autonomous weapons.

There you fortuitously have an existing governance structure already, and then it is a matter of mapping AI technology to follow this governance, to follow the Geneva Conventions. In terms of governance it is, in my view, actually a more developed field than many other industry applications of AI because there is a very strict agreement. The Geneva Conventions are actually a very good model for AI governance.

Now, from there you map out when and how you should build automation into weapons systems, when and how you should have humans in the loop or human control, and where you can allow for some degree of automation and autonomous decisions made by machines, following the Geneva Conventions. Of course the whole story is not that simple, but still it is a very good case of technology following governance.

I wish we had that already in every industry. Let's talk about the health industry again. For any drug to get approval there is an established due process—approval agencies and so on—and for any medical professional to practice medicine they have to be certified and recognized. So health care is a heavily regulated industry.

The application of AI there also has to follow the regulations of the health industry. What I mentioned about the challenge of getting data from patients is exactly a reflection of strong governance in that industry. It is not an industry where anything goes. You cannot apply AI just because you want to and you can. You cannot. It is governed.

Also, deep-learning algorithms cannot be used in many diagnostic situations because if they are not explainable the doctors will not accept them. It is another case where governance comes before technology development. It is not that we cannot do it, but governance comes first: regulations come first and explainability comes first. This is another industry where there is this interplay of governance and technology.

My view is maybe not the same as that of a lot of the authors of the papers you mentioned. I think technologists and policymakers can work together, as they have in regulating weapons, in regulating drugs and medicines, and in regulating the health industry. It can be done in any industry. Even in the financial industry there is heavy regulation on what can and cannot be done, with or without technology. It is industry-dependent regulation.

For example, the Internet industry—I don't know what you call them, the new economy, the new Internet industry—is so new that there is little governance. This is a case where the industry developed so rapidly before any governance was set in place, and that is why you see governments everywhere in the world catching up to the Internet companies. They are issuing regulations after the fact—regulations on recommendation engines and so on—and oftentimes, to come up with these regulations, they need to involve the people who actually built the systems in writing them. This is a particular industry where development was so rapid that the regulations—on data privacy and so on—are now catching up. I think that is the area where people are more concerned, because the Internet industry developed so rapidly.

But we are slowing down—not slowing down technological innovation in the Internet industry, but being more responsible and more mindful.

I am again optimistic. There are problems, there are issues and challenges, but we can solve them. We can meet these challenges. We can work together to make things better.

ANJA KASPERSEN: Pascale, I know that art is very important to you, and you said in your introduction that you grew up with parents who were very dedicated to the arts. How has that impacted your work and your interaction with these technologies, which sometimes take the form of artwork?

PASCALE FUNG: I think from my background—my parents are professional artists—and from my love of science fiction, you can probably deduce that I love art. I am an art lover, and I believe in the power of art to push the envelope of human development. Just like sports, just like the Olympic Games you are witnessing, which represent human aspiration, art is a manifestation of human aspirations as well.

I have hope in using art to push the development of AI. A lot of people have talked about using AI as a tool for artists—for example, AI can generate paintings now, artists can use AI to compose music, and so on; that goes without saying, it is very obvious—and AI can be used as a tool for artists just as it can be used as a tool in other industries.

But my hope is to see art pushing innovation in AI. As an example, my students and I work on natural language generation, and one application we can use it for is to generate scripts and novels. I also touched upon that earlier.

Recently I collaborated with a famous artist in China, Xu Bing, a conceptual artist—an artist of everything—who does great work. Together we made the world's first fully automatic AI film: the system generates scripts automatically, then goes into an archive of recent news clips, social media clips, and surveillance clips and generates the images that correspond to the automatically generated script. The result is a film that is both infinite and made without any human director, producer, actor, composer, and so on.

Why do we want to do that? Because we want to see how we can push this technology to create something that has never been created before. Throughout this collaboration we actually had to come up with new natural language generation algorithms, so this is a very good example of art pushing technology rather than technology pushing art. I am a big proponent of using art to push where we can go.

I don't want to say it's to develop the technology. It's not. It's to develop the art and the technology simultaneously. In this case the technology is the art and the art is the technology. It is not two separate disciplines or two separate components. It is one. That is something I have become passionate about recently as well.

ANJA KASPERSEN: I guess it goes back to where we started off: How do you teach empathy? How do you teach a machine to interpret complex human emotions such as humor, which requires a level of creativity? There have been a lot of discussions about whether an AI can ever be creative, or even whether we want an AI to be creative, but what you are saying is that you can use art not to give it attributes of creativity, consciousness, or agency, but simply as one means and method of teaching it about empathy.

PASCALE FUNG: That is very interesting. I didn't think about it that way.

Actually, these large language models have become quite creative. I mentioned that they can be prompted to program by themselves. They can be prompted to create scripts, as I just mentioned, automatically from scratch. The scripts don't look like normal scripts yet—it is the first generation of AI scripts, so they are still very primitive, like the silent films at the beginning of filmmaking—but one day they will become very good.

And they are very creative, but again they are creative because we humans made them so. The agency is still ultimately human agency. We make them creative. Philosophically, yes AI can be creative, but still they are not the ones who decide to be creative. That is still controlled by humans.

ANJA KASPERSEN: It is an artistic representation of who we are.

PASCALE FUNG: Indeed. I know there are terms combining technology and philosophy—they call it "technophilosophy." I have not seen a term for this; maybe it is time to come up with one.

ANJA KASPERSEN: Technoart?

PASCALE FUNG: AI art? We call it "AI art," which is both AI and art. AI art is a new form of art that is coming into being. Just as earlier generations of media art, digital art, photography, and print art became new forms of art, AI art will become, and is becoming, a new form of art.

ANJA KASPERSEN: My last question for you, Pascale, is one that I have asked several other guests on these podcasts as well: As someone who has been at the vanguard of change in the AI research domain for many years now, what would be your advice to those grappling with ethical dilemmas or navigating their professional journeys?

PASCALE FUNG: I think what I would tell the younger version of myself is to find meaning in what I do. Science fiction will remain an inspiration for me. I continue to be inspired by science fiction and philosophy. I didn't talk about philosophy, but actually I read a lot of philosophy and I learn from philosophers.

I think passion without meaning can lead to disaster and to failure, so for young people who are either doing AI start-ups or trying to pursue a career in AI, I would advise them to find meaning in what they do. Why do you want to do that?

It can be anything. It can be, "I want to save other people's lives," it can be, "I want to cure cancer," or it can be pushing the limit of something. To have passion but also to have a purpose.

I think passion without meaning brings a lot of pain. What do I mean by that? I did start-ups in the 1990s and all that. I did it with blind faith and passion, but I was not sure what I was doing it for other than "changing the world." We can change the world for the better or we can change the world for the worse.

I think if every one of us finds a positive meaning in what we do, combined with our passion for doing it, then together we can really change the world for the better. My god, the world needs to be improved—the environment, the health crisis we are facing today, energy, mental health, politics, the radicalization of beliefs, and conflicts between different cultures. We really need a lot of help.

If we can find meaning in what we do and use our passion to focus on improving the world, making it better, and moving the needle in a positive way, then collectively our children will live in a truly better world than the one we live in today.

That is my hope and that is the meaning that I have found in my life and in my work. That is what drives me and many of my colleagues to work tirelessly and to devote our energy and our passion to what we do. It's not about publishing a lot of papers. It's not about being recognized with this honor or that honor. It's not about getting paid at this grade or that grade. It really is about how we can make the world a better place. I believe those who strive to make the world a better place also succeed in what they do, including start-ups.

To the investors who are listening—I have also worked with a lot of investors—for you to find a successful start-up, you should also ask, "What is the meaning in what they do?" If they do have a meaning, then they have true value to the market and to the world. Then they are worth investing in because they will succeed. This is what I want to say at the end: Find meaning in your passion and find meaning in doing AI.

ANJA KASPERSEN: Thank you, Pascale, for sharing your time with us, stories from your upbringing, and your deep expertise. This has been a captivating conversation.

Thank you to our listeners for tuning in, and a special thanks to the team at the Carnegie Council for hosting and producing this podcast.

For the latest content on ethics and international affairs be sure to follow us on social media @carnegiecouncil. My name is Anja Kaspersen, and I hope we earned the privilege of your time. Thank you.
