Creative Reflections on the History & Role of AI Ethics, with Wendell Wallach

May 26, 2021

How is the new global digital economy taking form? What are the trade-offs? Who are the stakeholders? How do we build “participatory intelligence”? In this wide-ranging AI & Equality Initiative podcast, Senior Fellow Anja Kaspersen speaks with Carnegie-Uehiro Fellow Wendell Wallach about the history of computational and human ethics and their synergies and conflicts, the growing impact of AI on society, how to make sure that this technology works for everyone, and much more. Wendell Wallach has occupied a unique role in the evolution of AI ethics and shares creative insights on how we ought to tackle the challenges brought to the fore by the bio/digital revolution.

ANJA KASPERSEN: Hello and welcome to an exciting new podcast from the Carnegie Council AI & Equality Initiative. I am Anja Kaspersen, a senior fellow with Carnegie Council.

WENDELL WALLACH: And I’m Wendell Wallach, a Carnegie-Uehiro Fellow. This podcast series will confront the issues of systemic inequality and newly created inequities as they arise in the deployment of artificial intelligence (AI).

ANJA KASPERSEN: We will bring together diverse voices from leaders in the field to help us understand ethical challenges we are facing today and convey practical insights that promote equality and identify ways to mitigate unanticipated harm.

WENDELL WALLACH: To learn more about this initiative and access additional podcast episodes, visit our website at www.carnegieaie.org.

ANJA KASPERSEN: We hope you enjoy the show. Thank you for listening.

I have been looking forward to having this conversation with the wonderful Wendell Wallach for a really long time. Wendell is, as you heard in the intro to this podcast, the co-director for this Initiative. Wendell has also been a mentor of sorts to me and to so many other people in the domain of technology and ethics. What we often forget in our eagerness to understand, to lead, and to codify is the larger historical context underpinning scientific discovery and the evolution of the field of ethics and particularly tech ethics.

Wendell is a remarkable polymath with an extraordinary career. His journey started in the civil rights and anti-war movements of the 1960s and included a stint as a spiritual mentor in the 1970s. He started and led two computer companies during the infancy of personal computer adoption, then moved on to a rich academic career, and is recognized as one of today's leading authorities on machine and life science ethics.

In addition to all of this, he has meditated for more than 50 years, although with perhaps a different take on contemporary mindfulness, which he calls a "silent ethics." He is a Carnegie Council senior fellow and a scholar at Yale University's Interdisciplinary Center for Bioethics. Wendell co-authored with Colin Allen a book called Moral Machines: Teaching Robots Right from Wrong, which maps the new field variously called "machine ethics" or "computational ethics," and his most recent book is A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control, both of which we will speak more about during this conversation.

Wendell, a huge welcome. As I alluded to in my intro, I have been looking forward to this conversation with you for years. Can artificial intelligence (AI) be deployed in ways that enhance equality or will AI systems exacerbate existing structural inequalities and create new inequities? You have conveyed on a few occasions that you have at least a preliminary answer to this question.

WENDELL WALLACH: Thank you, Anja. At least preliminarily, there is a fundamental contribution to structural inequality arising from the digital economy and AI that is not being ameliorated at all by the focus upon AI ethics or AI for Good. These are well-meaning initiatives, and I think they arise from the subliminal understanding that the digital economy is truly exacerbating structural inequalities, and yet they are fundamentally weak in comparison to the tradeoffs we are getting from the impact of the digital economy.

I believe all of us know that during the pandemic the digital companies—and not just the big names like Microsoft and Apple, but really thousands of small digital companies—have been flying high on the stock market, and those who have invested in them have really grown their wealth at a rate unseen in all but a few financial explosions in earlier decades. So what we have right now is a time when significant portions of the world's population, both in wealthy and in poor countries, are suffering, and yet the digital economy is thriving. That means that those who own stock, the owners of capital, are seeing their net worth grow exponentially, while 50 percent or more of the public is already mired in indebtedness, with no idea how they will get out of the debts that existed before the pandemic and which have now been made even worse by it.

But this trend was here long before we had the pandemic. The trend is pretty straightforward. The trend is a digital economy that largely rewards a "winner take all." Even in social media and in search we see that there is generally one company that dominates. So that is our first challenge.

The second one has been an erosion of productivity gains that have gone to wage earners, to labor, over the last few decades, at a time when there is a growth in productivity gains going to those who have capital, the owners of capital, those of us with stocks. Of course, we all know that most of the stocks are owned by a very small percentage of the public. In America two years ago the estimates were that 84 percent of all financial instruments were owned by those in the top 10 percent economic bracket.

But what is happening is a situation where every job that is roboticized means that all productivity gains for that job are going to go to the owners of capital, and none will go to labor. It is these trends that are forcing a serious exacerbation of inequalities even while we are flirting with the many interesting, significant, but not particularly overpowering means to ameliorate that inequality.

ANJA KASPERSEN: Wendell, in your 2015 book A Dangerous Master, long before we had a pandemic on our hands, you championed the necessity of keeping the accelerating adoption of technology within a humanly manageable pace, and you went on to ask, "Do we, humanity as a whole, have the intelligence to navigate the promises and perils of technological innovation?" Looking back on the six years that have gone by since this book came out, can you elaborate on what you meant by this and how it relates to your thinking now, and more particularly to the AI & Equality Initiative?

WENDELL WALLACH: My concerns have been manifold.

One is that the pace of technological development far exceeds that of our ability to put in place ethical or legal oversight.

Secondly, the pace is such that whenever legislators, or governance of any form, look at ways to rein it in, they are told that they do not understand the technology and that if they intervene they will undermine innovation and the productivity gains from innovation—and of course, we are told, that is something we cannot afford in the modern age, given the struggle for efficiency and the many pressures on the funds we need to support our citizens more broadly.

We are at this difficult inflection point where the speed is rapid and our ability to tame it is poor. Let me give you an example of why I am so concerned here. We are now turning toward lethal autonomous weapons, weapons systems that can take actions with little or no meaningful or direct human control, and the argument for that will be that we need to do so to counter the threats from other parties. But the reality is that we are being directed toward a reliance on technologies that will make decisions even when we cannot predict what those decisions will be and even when those decisions could be very serious in terms of exacerbating harms.

What I am trying to argue for, whether this is in trading and financial markets or defense systems or systems in any walk of life, is that if humans do not understand the activities or cannot control them, then those systems are beyond what is acceptable, and we need to put brakes in place so that we can hold the actors who implemented those systems accountable for any activity that the systems take.

ANJA KASPERSEN: These considerations you share with us now have been at the forefront of much of your work over the years, and I believe that in your book Moral Machines you speak of what you call "operational machine morality," implying that robots or machine-based systems can be programmed with values and moral behavior and act accordingly within tightly constructed contexts. Can you explain what that is, and if I may, guide us through the history of machine ethics to allow our listeners to better understand both how the field of machine ethics evolved over time and how the field of machine ethics converges with that of the digital economy that you alluded to earlier?

WENDELL WALLACH: Sure, happy to do so.

Moral Machines was a book written to map the then newly born field often referred to as "machine ethics" or "machine morality." "Machine ethics" seems to be the predominant term. It looked at the question of whether we could implement sensitivity to moral considerations within the AI systems we created and whether they could factor those ethical sensibilities into the choices and actions they took.

When we bring up this subject, the first thing nearly everyone thinks about is Isaac Asimov's "Laws of Robotics." Isaac Asimov was writing fiction. He was not engineering systems, but the interesting thing that Isaac Asimov did with his story "Runaround," which was the first one to introduce the Three Laws of Robotics—later he added a fourth, the Zeroth Law—was that he radically altered the history of science fiction about robots. Up to that time, nearly every robot was a robot that "went bad" at some juncture. Asimov introduced the idea: Well, maybe robots could be engineered in a way where they could be good. He proposed these rather straightforward, rational rules: Basically, do not harm a human, obey a human, and then engage in self-preservation. These Three Laws are arranged hierarchically. The First Law trumps the Second, and the Second the Third.

This seemed rather logical, but in actual fact they would not work very well. For example, a robot would not necessarily know if a surgeon on a battlefield with a knife in hand was actually helping a soldier or was there to kill that soldier, so should it protect the soldier from the surgeon or not?
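To make the hierarchy concrete, here is a minimal, hypothetical Python sketch (the predicates, field names, and situation dictionary are invented for illustration and come from neither Asimov nor Moral Machines) of how priority-ordered laws might be evaluated, and of why everything hinges on judgments the machine may not be able to make:

```python
# Hypothetical sketch of Asimov's hierarchically ordered laws (illustration only).
# The ordering is easy to encode; the hard part is the predicates it relies on.

def violates_first_law(action, situation):
    # "Do not harm a human (or allow one to come to harm)."
    # The robot may simply not know: is the surgeon's knife about to help
    # the soldier or kill them?  May be True, False, or None (unknown).
    return situation.get("harms_human")

def violates_second_law(action, situation):
    # "Obey a human" -- violated if the action disobeys an order.
    return situation.get("disobeys_order", False)

def violates_third_law(action, situation):
    # "Preserve yourself."
    return situation.get("endangers_self", False)

def permissible(action, situation):
    """Apply the laws in strict priority order: First trumps Second trumps Third."""
    for check in (violates_first_law, violates_second_law, violates_third_law):
        verdict = check(action, situation)
        if verdict is None:
            # The law presumes knowledge the machine does not have --
            # exactly the battlefield-surgeon problem described above.
            raise ValueError("insufficient understanding to apply the law")
        if verdict:
            return False
    return True

# The battlefield-surgeon case: the robot cannot tell whether the knife will harm.
ambiguous = {"harms_human": None, "disobeys_order": False, "endangers_self": False}
try:
    permissible("intervene", ambiguous)
except ValueError as err:
    print(err)  # insufficient understanding to apply the law
```

The hierarchy itself is trivial to encode; the scheme breaks down at the point where the machine cannot evaluate whether an action harms a human at all.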

The laws presumed a great deal of intelligence that the machines were not necessarily going to have, and this is of course the problem that we have right now. We are developing systems that are being deployed in some pretty serious applications, and yet they do not really have the intelligence to understand what they are doing.

So when we look at the prospect of making them sensitive to ethical considerations, where are we now? Where we are right now is that the value or the ethics that the machines have is largely hard-programmed into them by the engineers who design the systems, and those values more or less emulate those of the companies that they work for, or at least the cultures that they are born into. We call that "operationally moral." They will instantiate certain values by their choices and actions, but they are more or less operations that have been hard-coded into them.

But now we are encountering situations where we don't always know what the AI system, what an autonomous system, will have to deal with. Think of a self-driving car. We cannot always predict all of the situations that it will encounter, and therefore these systems are going to need the ability to engage in some kinds of ethical subroutines to analyze a situation and make a decision about what is the appropriate action. We refer to this stage of development as "functional morality."

ANJA KASPERSEN: You have also spoken about "artificial morality" in some of your writing, and I believe you used the phrase, "The Asimovian Age," positing that technological development is at risk of becoming a juggernaut, beyond human control. If you take those aspects you just shared with us into account, what are humans going to be good for in the future?

WENDELL WALLACH: That may be the central question of our time, particularly if we think that robots and artificial intelligence in general will rival humans in nearly all activity. We are a long way from that. What we see today are systems that have been designed for very specific purposes—win at chess, win at Go, win at Jeopardy!, figure out the protein-folding problem, or allocate resources equally among this group of people. These are largely logical problems that are within the realm of the systems that we have today, but those systems lack artificial general intelligence, and yet there is constant hype that they soon will have artificial general intelligence, and the hype ranges anywhere from ten years to a hundred years. But even a hundred years isn't that long a period of time when you start to look back at your life, particularly for somebody like me, who is now in my 70s.

So whether it is sooner or later, there is this looming threat of machines that will be smarter than us humans, and if that is the case, then what are we humans good for? What will be our function? Up to now, machines, robots, have largely been used as an intellectual mirror through which we look at the kinds of capabilities they have and how they compare to those we have and in what way they truly will exceed our capabilities and in what way they will be less than us. But there is this outstanding question of: "What are humans good for if the systems exceed us?" and secondly, "If they don't exceed us or between here and there, what are the functions that we should be fulfilling?"

The way this question has historically been put is: "How will we know when a machine is as intelligent as us?" and, "Once it is as intelligent as us, will it very quickly become superintelligent? Will it very quickly supersede human intelligence in a broad way?"

What that question or what that way of looking at artificial intelligence presumes is that intelligence is just one thing and that all of that can be captured within one system. But I think when we look at human beings we understand that intelligence is not one thing, and even though we have these practical instruments such as IQ for measuring intelligence, IQ only measures some facets of intelligence.

Intelligence is a much broader phenomenon. It is largely collective. Each of us exemplifies different aspects of it, but the whole history of humanity and the whole history of culture is a history of collective intelligence working its way through practical challenges that are immediate in the hopes of prevailing, surviving, and addressing more difficult challenges down the road.

ANJA KASPERSEN: Your research leads us to conclude that each and every one of us gives expression, for better or worse, to different forms of intelligence, but none of us is able to give expression to all forms of intelligence. A key tenet in science, especially quantum mechanics, is exactly this, that our best bet for understanding the world around us is not to describe how things are but how things interact with one another. Some would even argue that this is what AI is helping us to do. Others, including yourself, if I understand you correctly, would perhaps argue that the interactive aspect of our intelligence is often what gets overlooked, and that this deeply impacts our "social physics," a term you have used in some of your writing, and fundamentally, then, our understanding of equality as well.

WENDELL WALLACH: The extent to which we are all participating in a social process, a social process of interaction and collective evolution, has been largely lost in our stress on the individual: on cultivating those individuals deemed superior, ranking individuals, and giving undue rewards and incentives to those who fulfill the goals society considers most desirable to have fulfilled. Life does not proceed in that way. It never proceeded in that way. It has always been a process of interaction, and even individual evolution has always been a process of adapting to one's environment, and one's environment includes many other entities. Some of those entities are a threat, and we have had to find ways to compensate for the threats that they pose.

But in the human environment most of the people we interact with actually help us in various ways. One of the things that has come out of the pandemic is that we are starting to realize how dependent we have been on low-paid workers who are performing tasks that are ensuring our survival in getting through this pandemic. But in many ways it is the nurses and the delivery boys who are perhaps even more important than the corporate titans who have received tremendous incentives and rewards over the last few hundred years.

So, yes, what I am trying to stress is that placing tremendous importance on the individual has gone a bit too far. That emphasis actually came out of the Enlightenment turn in history, when the West turned away from being a god-centered culture and started to realize that the individual was important, that we needed to develop a science and an understanding of the individual, and values that serve individuals, not just the clerical elite or the gods the clerical elite apparently represented. We are now lost in a kind of individualism that does not allow us to recognize or take full responsibility for the collective, interactive dynamics that we are in and evolving through.

To be sure, artificial intelligence is part of that evolution. Artificial intelligence is already capable of giving expression to certain tasks and certain activities that are very hard for humans to perform. It can beat every grandmaster of the game of Go in the world, or at least it has demonstrated that every grandmaster that has gone against AlphaGo has been beaten. There are forms of intelligences that are being expressed artificially that actually augment this collective evolution, but the narrative we are getting lost in is one where they will have all forms of intelligence and that humans don't bring anything to the table that is of value, and I just think this is wrong on so many levels.

First of all, there are forms of intelligence that AI is very bad at—situational awareness; common-sense reasoning; affective, emotional intelligence; semantic understanding, understanding the meaning of the words and concepts that are being manipulated rather than just syntactical intelligence, where it manipulates the symbols, the words, without appreciating what the words may stand for.

These are capacities that we cannot implement today, it is anybody's guess when we will be able to implement them, and they may not even be natural for the artificial platforms we have created. The artificial platforms are largely logical platforms, whereas human beings have evolved out of an affective biochemical regime, out of a biological regime in which feeling has been more important than reasoning, and reasoning actually came relatively late, even in our hominid evolution. The point I am trying to make here is that there are forms of intelligence that still reside with humans and will for the next few decades, if not much longer.

But I also like to stress other forms of intelligence that we do not fully acknowledge or recognize. Those that are of particular interest to me are self-understanding, which is a form of intelligence, and moral intelligence, the ability to work with very challenging moral dilemmas. But those are not the only forms of intelligence out there. Think about "street smarts." Most of us would not be able to survive if there was a breakdown of urban environments. We don't have the street smarts for it.

ANJA KASPERSEN: I would like to come back to your personal journey, Wendell. I wonder, as you look back, can you trace your curiosity in philosophy, science, and history in a rudimentary form to some of the things that fascinate you now and that you have spoken about so far?

WENDELL WALLACH: Of course, this is one of the things that one does in their older age. They spend a lot of time reflecting on "how I got to be the way I was and what my life was all about and whether it really had any meaning or not." I have certainly done some of that. There is always a difficulty here in that we tend to stress the most dramatic moments in our lives, the epiphanies, the experiences that seem so formative, when sometimes those may be less important than our day-to-day activities.

For example, I had a small epiphany when I was 14. It was a situation where my mother had taken me to a high school, which happened to be ten miles away from our home. I thought it was understood that she would pick me up from the basketball game that was being played that night at the high school after it was over, so I did not try to get a ride back. Suddenly, late at night, there is no one there at that high school, and there is no mother coming up the access road to this high school, which happened to sit on top of a little hilltop in northwestern Connecticut.

I was a little too intense for my young age, a little too serious. I was frankly freaking out. Suddenly this thought appeared in my head, as if from nowhere: This is not doing any good. This isn't making any difference. This isn't changing anything. Accompanying that thought was a kind of deep quiet.

I made my way down the hill and saw lights on in a little brick textile factory across the main road. I knocked on the door, gave a call to my mother, and said, "I'm waiting." Of course, she jumped into the car and came and picked me up, but it took about half an hour for her to arrive because there were serious back roads for her to travel.

This incident was soon lost from memory. I gave it little or no attention. But suddenly it has loomed very large in recent years, and it has loomed large for a lot of reasons. One of the reasons is that I became very interested in the nature of cognition and the inverse relationship between our ability to think and our ability to perceive. The more we focus upon thinking, the more we turn inward, and that distracts from our ability to just be aware of our environment.

So quieting mental processes, thought and thinking, loomed very large during the early years of meditation and time I spent in India. It loomed very large because many people believed that was the access to altered states of mind or at least those kinds of states of mind such as "flow" and "oneness" that have been romanticized by spiritual people over the centuries.

I realized that that little moment on the hill, what I call my "Wamogo night," was two things. It may have actually been the first time in my life I ever became self-aware, that I was not just lost in myself but in some way I stood outside of myself and became aware that what I was doing was not really functional. The other is that it did take me into a place of silence. By the time my mother showed up in her little Volkswagen convertible, she was in her nightgown, furious, and I just quietly got into the backseat and let her go on as we drove home. It didn't matter to me. I was safe. I was on my way home.

So that was an early epiphany, but now I have meditated for 50 years. There was a period in the 1970s when I even functioned as a little bit of a spiritual mentor for many other people on spiritual pathways. It seemed I was walking a bit ahead of others, and I tried to share my insights with them.

These kinds of moments, these moments of inner quiet, became commonplace, and pathways to them became commonplace. That has evolved into a way of looking at and exploring how life unfolds that I like to refer to as "first-person cognitive science."

Let me put that term in context and put the times in context. First of all, I was born in 1946. I am on the early cusp of the Baby Boomers. Those of us on the early cusp of the Baby Boom were in the womb when bombs were being dropped on Hiroshima and Nagasaki. It wasn't just our numbers. The Baby Boomers were a very large generation, but throughout the evolution of my generation there was a sense that there was something a bit different about us than those who went before. Whether that was really true or not, we did romanticize that difference.

When we got to college, psychology was dominated by a field called behaviorism, and behaviorism was largely about, "Well, you have inputs to a system, and then you have outputs, and we can't really know what happens within the mind, we can't really know what happens within the human body." Behaviorism dominated most intellectual thought. It went back to the 1920s with work by researchers such as John Watson and later by B. F. Skinner. Its basic principle was that the mind is a black box.

What happened in our generation—and it actually started happening even a few years beforehand with the Beats—was this experiment with drugs, this infatuation with altered states of mind and altered states of consciousness, and the birth of an approach to psychology known as cognitive science. So we are now in this age of cognitive science, but cognitive science has largely been third-person. It has largely been applying the scientific method to what we can understand about human cognition. It has been very skeptical about whether or not we can have any true understanding of how humans function.

Counter to that has been a generation of serious people engaged in serious forms of meditation and introspective processes, and they have tried to see if they could develop a first-person cognitive science, a cognitive science where one comes to at least a degree of understanding, if not mastery, of one's own cognitive processes. That is what I refer to as first-person cognitive science. It is really the awareness of thoughts, feelings, thinking, and states of mind, and trying to appreciate the dynamics and the relationships between these different elements of our psychological activity.

ANJA KASPERSEN: I think we all can agree that this age of digital media, digital tools, digital transformation, whatever way you refer to it, is definitely an era of distraction, and one concern of course is that we forget how to bring, or maybe don't even know any longer, this silent ethics into our day-to-day lives.

WENDELL WALLACH: So let me talk a little bit about the silent ethics before we jump to what does it mean within our digital age, because I think it will be a little helpful for understanding how I look at, let's say, the introspective spiritual explosion that took place in the 1960s and 1970s largely in the United States and Europe, but really it filtered around the world.

It was this infatuation with mental states, and it was a recognition that, for a lot of the mental states or altered states of mind that people pursued, the likelihood of their occurring was inversely related to the amount of thinking you were engaged in. This pervaded sports medicine, it pervaded artistic activity, it pervaded all kinds of things that people could be doing, and it gave birth to psychological research around flow. It gave birth to sports psychology in terms of how people could have endurance when they ran marathons or how they could anticipate what they would encounter on a ski slope. It was all about how you in effect got your body-mind into a relatively quiet space and were not caught up in distraction created by thinking.

But one of the first insights many people had was that repressing your thinking does not get you to your goal; if flow or inner silence are to arise and creativity is to occur, they come as byproducts of attending to what your thought patterns are giving rise to. But when those states do occur they are embodiments of being in the right place in the right way and at the right time. They come with this feeling that "This is where I am meant to be in this moment." To me that is a kind of ethical expression of what it means to be in a silent state. But as I say, you can't get to those states directly. Those states are largely achieved through attending to the random thoughts that are occurring, the mental stress that is occurring.

So, in effect, what I am saying is that you attend to the mental stress, but you attend to it in a way where you choose the form of intention or the actions that lead to a relative quieting of the mind if not an absolute quieting of the mind. That is why this becomes a silent ethic. It seems like an inverted way of thinking, but I do think we do have that affective sensitivity to know whether the various options we consider make us more or less quiet.

Let me tie that in for a moment to research on artificial intelligence, because one of the fathers of research on artificial intelligence was Herb Simon. Herb Simon received the Nobel Prize in Economics for a theory in which he pointed out that people are actually limited in how many options they can consider, and that they select the option that he said was most "satisficing." They may not select the perfect option when they are working their way through a problem, but the option they take has this affective quality to it, that it is satisficing. We like to think "satisfying" is really the word, but he was trying to say something a little bit more subtle.
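As a rough illustration of the distinction Simon was drawing, the sketch below contrasts a satisficer, which stops at the first option that clears an aspiration level, with a textbook optimizer; the options, scores, and threshold are invented for the example:

```python
def satisfice(options, score, aspiration_level):
    """Return the first option whose score clears the aspiration level (Simon's idea),
    rather than exhaustively searching for the best one."""
    for option in options:
        if score(option) >= aspiration_level:
            return option
    return None  # no option was "good enough"

def optimize(options, score):
    """The contrasting, textbook-rational approach: evaluate everything, pick the maximum."""
    return max(options, key=score)

# Illustrative use: choosing a commute under a made-up scoring function.
routes = ["back roads", "highway", "train"]
comfort = {"back roads": 0.6, "highway": 0.8, "train": 0.9}.get
print(satisfice(routes, comfort, aspiration_level=0.75))  # "highway" -- good enough, stop there
print(optimize(routes, comfort))                          # "train"   -- best, but costlier to find
```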

My point is that we are always participating in an activity of trying to find those mental states which are most engaging for us, and I would say that those are also the ones that allow us to be most present and engaged with our environment, and that engagement again is inversely related to the extent to which our mind-body is dominated by attention to the inner thoughts and thinking that are going on.

ANJA KASPERSEN: A lot of the work that happened in the AI research fields actually started ten years after you were born, at Dartmouth.

WENDELL WALLACH: Right.

ANJA KASPERSEN: Since we heard a lot about the winters and summers of AI, the various milestones of AI, in your view what were the big milestones? Can you say something about the history, 1956 onwards, and also where do you see us going?

WENDELL WALLACH: Actually the first milestone predates 1956, and it is 1950. It is when Alan Turing writes an essay which has become one of the "golden oldies" of cognitive science and AI research. "Golden oldies," as many of you know, is a term attributed to records which sold millions of copies. There are in nearly every field of research a few articles that get cited tens of thousands if not hundreds of thousands of times, and I like to refer to these as "golden oldies."

Perhaps the greatest golden oldie in the field that is now known as artificial intelligence is this paper that Alan Turing wrote, in which he asked the question, "Can machines think, and how would we know whether they think?" This is actually a wonderful paper. He came up with what is known as the "Imitation Game." We now think of it as the Turing Test: an observer who does not know which respondent is a human and which is a machine has to guess, from their output alone, which is which, and Turing basically said that if the human judge cannot guess accurately, then the machine is for all practical purposes intelligent.
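A bare-bones rendering of that protocol might look like the following sketch; the judge, respondents, and questions are stand-ins supplied by the caller, and nothing here is drawn from Turing's paper beyond the basic setup:

```python
import random

def imitation_game(judge, human_respondent, machine_respondent, questions):
    """A toy rendering of the Imitation Game: the judge sees answers from two
    hidden respondents and must say which one is the machine."""
    # Hide who is who behind anonymous labels.
    respondents = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:
        respondents = {"A": machine_respondent, "B": human_respondent}

    transcripts = {label: [r(q) for q in questions] for label, r in respondents.items()}
    guess = judge(transcripts)  # judge returns "A" or "B" as its pick for "the machine"
    truth = next(label for label, r in respondents.items() if r is machine_respondent)
    return guess == truth       # if judges do no better than chance, the machine "passes"

# Toy usage with trivially scripted respondents and a naive judge.
human = lambda q: "hmm, let me think about " + q
machine = lambda q: "ANSWER(" + q + ")"
naive_judge = lambda ts: max(ts, key=lambda k: sum("ANSWER" in a for a in ts[k]))
print(imitation_game(naive_judge, human, machine, ["Can machines think?"]))  # True: the machine was caught
```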

The Turing Test is still around. It has been criticized for many years, but I think it laid the popular foundation for excitement about artificial intelligence. Not that there weren't precursor systems; the electromechanical machines Turing helped develop to break the Nazis' Enigma code during the Second World War were seminal in this regard. But I think he also created excitement around the possibility of thinking machines, machines that would have this true intelligence that was dubbed "artificial intelligence" in the invitation to a workshop at Dartmouth College in the summer of 1956. That is where the term was coined. It is an unfortunate term because it has made machine intelligence largely about a comparison to human intelligence, where, as noted, in many respects we now have forms of machine intelligence that are not really comparable to, or are superior to, what nearly all but perhaps a few humans can engage in.

So that was the first moment of excitement. It predicted rapid progress, so rapid that they expected that we would have artificially intelligent machines that would exceed human intelligence within a decade and that would beat a grandmaster at chess within a decade. Furthermore, they believed that vision was such an easy problem to conquer that they assigned it to one graduate student for the summer.

As we all now know, it took roughly four decades before Deep Blue beat Garry Kasparov at chess, and we still don't have machines that are fully accurate in natural language, and we still don't have machines with visual acuity equivalent to that of humans. In some respects, yes, we have machines that have visual acuity that we don't have, but they don't have the diversity of visual capabilities nor the ability to attach images to words as readily as we can. So that was the first winter. We have had a succession of those periods.

For many years there was an approach to artificial intelligence that was called "good old-fashioned AI." I am not going to go into explaining what each of these stages was. It got superseded for a while by an approach called "neural networks," which was less about building a machine around one central processing unit and more about having chips, each of which more or less functioned as a crude neuron. There were great hopes that if we emulated the human brain we would then get outputs equivalent to human capabilities, but that enthusiasm also died on the vine, and we had a long winter after the failure of neural networks.

We are now in an interesting period that was set off by what is often referred to as the "deep learning revolution," which is really neural networks come back to life. What happened was that a combination of computing power and new strategies yielded neural networks that were able to look at massive quantities of data in a relatively short period of time and pull out categorizations and relationships within those large bodies of data that might not be recognized by a human researcher.
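As a toy stand-in for what that involves, the sketch below trains a tiny from-scratch neural network (one hidden layer, plain gradient descent) on synthetic data containing a nonlinear relationship; it is a deliberately miniature illustration under invented data, not a description of any production deep-learning system:

```python
import numpy as np

# A toy neural network: layers of crude artificial "neurons" that, given data,
# pick out a relationship (here, the sign of a product) a linear model would miss.
rng = np.random.default_rng(0)

X = rng.normal(size=(500, 2))                 # synthetic "data"
y = ((X[:, 0] * X[:, 1]) > 0).astype(float)   # a nonlinear relationship hidden in it

W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                  # hidden layer of crude neurons
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # output probability
    return h, p.ravel()

for step in range(2000):                      # plain gradient descent on cross-entropy
    h, p = forward(X)
    grad_out = (p - y)[:, None] / len(X)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * (h.T @ grad_out);  b2 -= 0.5 * grad_out.sum(0)
    W1 -= 0.5 * (X.T @ grad_h);    b1 -= 0.5 * grad_h.sum(0)

_, p = forward(X)
print("training accuracy:", ((p > 0.5) == y).mean())
```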

That is what has led to this machine-learning explosion. It is not just deep learning approaches; there is this feeling that right now we are in the midst of an explosion of applications through not only deep learning but reinforcement learning and other approaches, and one of the debates is whether these systems themselves are plateauing and we need a very different approach, or whether, even as we get all of these productivity gains and applications out of existing machine-learning approaches, we are also plateauing in terms of real breakthroughs going forward.

ANJA KASPERSEN: How much of this plateauing would you say is because we are stuck on this track of trying to emulate and replicate? We are not getting any closer to understanding, which is also a point colleagues in this field are making. Is it a weakness of continuing this research that we are fooling ourselves, thinking that being able to correlate data better through algorithmic processes somehow gives us understanding, and that we are trying too hard to emulate the human mind when that is not where the real breakthroughs will lie?

WENDELL WALLACH: That is a great question, and I think it is the critical question right now. I am of the mind that we have a fundamental problem and that we are actually focused in the wrong way on what artificial intelligence can or should do or what we should be trying to affect.

But let's come back to this question of understanding, semantic understanding, understanding the meaning of words, of emotions, and of the artifacts we deal with in life.

ANJA KASPERSEN: Can we code meaning?

WENDELL WALLACH: I am of those who believe it is highly unlikely, at least with the strategies we have for artificial intelligence. Again, this brings us to a core element of human intelligences. We are meaning makers. We impute meaning to the relationships and the artifacts and the symbols and the objects of our lives. Sometimes the meanings we impute are fake news. They are false. But that doesn't alter the fact that this activity of creating meaning out of our lives and imputing meaning is central to human culture.

Let me come back on this point about meaning just for a moment because it brings us to probably what was the number-two golden oldie in the realm of literature about artificial intelligence, and that was a critique of the Turing Test made by the philosopher John Searle called "the Chinese room." Searle's point was pretty simple. He said: "You know, you run the Turing Test with me in one room and a machine in the other, and you give me the same books, data, documents about Chinese that the machine has, and then you have a Chinese speaker pose questions to us." He said: "If you took out the factor of time, I might give the same answer that the machine gave, but that doesn't mean that I understand Chinese. I would just be going through a bunch of mechanical processes. I would just be going through a bunch of syntactical processes, but I would not understand Chinese."

Interestingly enough, the Chinese room argument has generated millions of words of refutation from cognitive scientists, particularly researchers in AI. But I think what Searle was saying was pretty basic: with the platforms we have, it is not clear that we are going to get semantic understanding out of them. While it is fascinating to see how far we can get with digital platforms—I'm fascinated by this research agenda—I think the desire to get these kinds of capabilities misses where we can get the greatest advantage out of artificial intelligence and where we can ensure a more humanly meaningful future, one that is not about pitting machines against humans but is about bringing together teams of those with different forms of intelligences to work through the serious problems we need to direct our attention to and that cannot wait for artificial general intelligence.

We can't wait for artificial general intelligence to solve global warming. We can't wait for artificial general intelligence to solve the distribution crisis that we are in the midst of and which threatens to lead to revolutionary or authoritarian movements where people no longer have confidence that their governments will solve their problems or create a world that will meet their needs.

We are at a different point where we already have many forms of human intelligence, many forms of human expertise, and it would be nice to bring them to the table together with what we can pump from the forms of artificial intelligence that we have today or we will have in the near future to collectively solve our problems and to engage in a kind of participatory collective intelligence as opposed to a focus on individual intelligence, either that of an individual human being or that of a machine.

ANJA KASPERSEN: And in some ways, where we are now, it is not at all clear to us either how to interpret meaning or who will be the true alchemists of meaning going forward. The answer to that is definitely not going to rely on algorithms.

WENDELL WALLACH: It is not going to rely on the algorithms, and though it is a fascinating problem how far we can get toward a rich intelligence through digital platforms, it is not the most relevant challenge we have today. It is particularly attractive to, what shall I say, tech optimists, those who hope they can upload themselves into systems or that maybe they will be the geniuses of the future because they will be the Svengalis who have control of those machines, or at least have a relationship to those machines because they understand them. But that is again another elitist dream, an elitist adventure, about privileged people wanting to be in a privileged situation, and it does not speak to the collective challenges that we, humanity as a whole, have right now.

There is always a notion among trans-humanists, or among those who want to live eternally, that if these technologies are actually realized they can be spread among all of humanity, but that's probably not what is going to happen. It is going to be largely about creating new elites or different elites from the ones we have now. The techno elites are the wealthy who get to upload their brains, presuming that is really a desirable thing to do, or to radically extend their lives, but that has nothing to do with equality. That has to do with elitism.

ANJA KASPERSEN: You often speak of "trans maps" to mitigate against "technological solutionism," also a phrase you have used, to help us discern the actual from the fanciful, the real from the unrealizable in the ever-expanding map of these new technologies or rather new uses of sometimes problematic technologies.

WENDELL WALLACH: Let me define a few of those terms for those who are listening. What I mean by trans maps is transdisciplinary maps that roam across broad areas of understanding and intelligence. Unfortunately, with the start of assembly lines, humanity moved down a road of creating specialists and rewarding those with specialized skills, and not necessarily creating adequate reward structures for those who had a very diverse and rich set of skills. To be sure, some of those with a diverse, rich set of skills came up with some of our greatest inventions, but today we are still in an educational system that purports to want transdisciplinary—it is usually put in the term "multidisciplinary"—scholarship, making connections between one or two fields of research.

But I think the problem we have today is that we fail to have many minds that look comprehensively at the challenges we are confronting, and we turn to experts who often have very narrow visions. It is not that their visions are so narrow that they cannot make great progress in various fields. The problem we are confronting today is that we are not creating that cadre of individuals who feel comfortable going from field to field and who have the ability to grasp, in a transdisciplinary and comprehensive manner, the challenges that we have to direct our attention to today.

By trans maps I mean creating those primers, those introductions, that make people aware of large territories of understanding so they can at least begin to engage in the conversation, so that they don't necessarily have to defer to the expert who has specialized knowledge, but they know which questions to ask that expert to reveal the biases or the implicit assumptions within that expert's perspective.

It is not only that we need individuals with those transdisciplinary understandings but we need to have teams of those individuals sitting down at tables together, where each member of the team may have the specialties that they understand that others do not understand so that we can begin to address: How are we going to deal with global warming? How are we going to put in place governance mechanisms to ensure that the development of AI and biotech are in the interests of humanity as a whole and don't serve just the needs of the elites who expect to get great rewards from them? For me, that is a little bit of what I mean by "participatory intelligence," by "collective intelligence."

ANJA KASPERSEN: So proactively seeking what you don't know and being comfortable with it.

WENDELL WALLACH: Be comfortable with it, to know that that is the adventure. And that requires a truly comprehensive participatory intelligence. That requires the recognition that there are forms of intelligences that exist within indigenous communities, within people of various genders, people of various experiential backgrounds, and not just those who have had privileges.

ANJA KASPERSEN: You have opined in the past, Wendell, that navigating the future technological possibilities is a difficult venture, and it begins with learning to ask the right questions.

WENDELL WALLACH: I think first of all we need to create incentives for people to develop those abilities. I have been in the world of the ethics of emerging technologies for at least two decades now. There were no incentives to be in that world. There were no reward systems. Luckily now we are getting some research centers around AI ethics, around data ethics, and around biotech ethics, so at least that is a step forward.

But I would love to see the creation of incentives that gave young people who are fascinated by these broader challenges opportunities to sharpen those skills. That's going to require scholars sitting down and thinking through: How do we create a cadre of experts that are more fluid in their transdisciplinarity, more fluid in their ability to float from engineering challenges to governance concerns, without creating a lot of superficial thinkers? I think that's not a hard problem to solve, but one has to recognize that it's an important problem to solve.

Most of us have sort of been trained in a way where we get infatuated with our own ideas, get trapped in our own ruts, and lose the capacity for self-awareness of when we are in those ruts and of what questions might jolt us out of them.

ANJA KASPERSEN: Twenty-first century technology is changing our understanding of war in deeply disturbing ways. So should we worry less about dystopian superintelligent or sentient machines and more that we fail to perceive the implications of the goals we set for AI or the power paradigms we feed it with, perpetuating and embedding power structures that will change not just the character of warfare but the nature of war itself, and accepted beliefs about democracy with it?

WENDELL WALLACH: What's happening today is this explosion of breakthroughs, and this explosion of breakthroughs has set a pathway that nobody has any control over. This is a dangerous state of affairs, and this is a state of affairs, I think, that has given rise to disquiet among many communities, within many communities, and that disquiet in turn may also be feeding this infatuation with authoritarian regimes, the destabilization of societies, the feeling among many people that it's not clear at all that this technological juggernaut is really in my interest or in the interest of my family. We'll be creating either a revolutionary condition or we will have to build more and more powerful surveillance technologies to control people's behavior.

ANJA KASPERSEN: There has been a mushrooming of new initiatives and guidelines and frameworks and things to guide our way through this jungle of problems and tradeoffs and what have you. Do you see this as a positive development? Do you see something good coming from it? Do we see change in behavior? Do we see a greater understanding of tangible changes, tangible things that need to happen?

WENDELL WALLACH: I came out of the womb as somebody for whom the cup is half-full and I continue to always feel the cup is half-full and that even if we have slid backwards there will be inflection points, there will be opportunities that we can shape the future in a positive way. Even if I'm wrong, I will probably go to my grave with that belief.

I wrote A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control as what I thought was a primer on emerging technologies that would in a certain sense impart enough information to the intelligent reader that they would feel free to participate in these conversations about where we're going with emerging technologies and how we can actually manage, and even govern, them.

But I was surprised to find that I had underscored so many things that could go wrong that many readers gave up after a few chapters because I scared the bejesus out of them. That has always left this kind of background recognition that when you look at the dark side of technology, there are a lot of things going wrong and it's not at all given that humanity will find its way through this rather difficult next couple of decades.

Now, I clearly feel there is a pathway through, but I also get deeply concerned when I see irrational forces spreading lies and the growth of authoritarian regimes and activities by privileged elites that expand their privileges often at the expense of others.

One of the forms of intelligence I think we need to cultivate, as I said earlier, is self-understanding. Interestingly enough, AI may even help us in that regard. You see, AI algorithms today are being used by the social media companies to manipulate our behavior, to push our buttons and capture our attention. I would like to know what those buttons are. What's going on in the algorithm that seems to know me, as Yuval Noah Harari said, "better than I know myself"? If I understood my buttons a little bit better, perhaps I would catch when I was being manipulated, or I would be more circumspect when I hand over my information.

It's those kinds of ways of looking at traditional problems, and of ferreting out whether there are different pathways through which we might move forward, that we have to address.

But I'm not being naïve here. We have put a lot of systems in place that have momentum. There are a lot of things that can go wrong. There are respects in which humans are becoming dumber and their stupidity is being nurtured—and not just politically, but with these algorithms manipulating our behavior—and that, in my darker moments, makes me question whether we're going to make it.

So what am I doing today? I'm just trying to give the next generation a chance. Will they succeed? Will they prevail? The pathways are there, just as there are pathways to decarbonize energy creation and defuse global climate change, but whether we will take those pathways, that's not at all clear.

ANJA KASPERSEN: Fundamentally this technological juggernaut, as you refer to it, is challenging everything we have at our disposal in terms of mitigating harm. One area where AI is being used a lot, including in the military domain and the security domain, is decision-making.

WENDELL WALLACH: Yes.

ANJA KASPERSEN: In Moral Machines you and Colin Allen, your coauthor, posited that "top-down ethical theorizing is computationally unworkable for real-time decisions…[T]he prospect of reducing ethics to a logically consistent principle or set of laws is suspect, given the complex intuitions people have about right and wrong."

This is a really interesting statement in your book, especially as AI is being immersed into decision-making structures at the strategic level, operational level, tactical level, across all domains, fundamentally challenging also what we think about as mission command in more military-based structures.

Would you be able to take us behind this statement and share with us some reflections that went into your book around this issue and also now, observing many years later, how AI, sometimes without the AI fluency to guide the leadership on this, is being immersed into all levels of decision making?

WENDELL WALLACH: There's a number of different levels to that question, a number of things implicit in that question.

Let me first give a very quick synopsis of the book, which asks this question about whether we can implement moral decision-making. It is said that when you look at ethical theory, there are two basic approaches.

One is that you apply some broad ethical theory. That broad ethical theory can be utilitarianism, it can be the Ten Commandments, it can be Kant's categorical imperative, it can be India's Yama/Niyama, it can be Asimov's Three Laws. But what you have just read are the limitations we see in those approaches, and most people understand that there are disagreements among those different approaches about what is right and wrong, which creates this rather daunting problem that ethics may not be algorithmically reducible. It is one of the reasons why everybody talks about ethics but people are very dismissive of ethicists and their ways of looking at ethical problems: they don't necessarily give you an absolute right or wrong or an algorithm that brings you to that decision.

The other thing that ethical theory points to is the prospect of learning machines that, like a child, learn about what's right and wrong. Those are what we refer to as bottom-up approaches.

In Moral Machines Colin and I looked at top-down and bottom-up approaches, we looked at their strengths and weaknesses, and we decided in the end that if there is going to be a way forward it's going to require some amalgamation, some hybrid of the two, which might be applied to approaches such as virtue ethics, such as creating virtuous machines.
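Purely to illustrate what a hybrid of the two might mean in code, the sketch below lets hard-coded, top-down rules veto actions outright while a learned, bottom-up scorer ranks whatever survives; the rule, the scoring function, and the example actions are all invented for this example and are not drawn from Moral Machines:

```python
def hybrid_choose(actions, hard_constraints, learned_score):
    """A sketch of a hybrid moral architecture: top-down rules veto actions outright,
    and a bottom-up, learned evaluation ranks the options that remain."""
    permitted = [a for a in actions
                 if all(constraint(a) for constraint in hard_constraints)]
    if not permitted:
        return None  # nothing acceptable -- defer to a human
    return max(permitted, key=learned_score)

# Invented example values, just to show the shape of the approach.
no_deception = lambda a: not a.get("deceives", False)                  # a top-down, hard-coded rule
fairness_model = lambda a: a.get("benefit", 0) - 2 * a.get("harm", 0)  # stand-in for a learned scorer

actions = [
    {"name": "nudge user", "deceives": True,  "benefit": 3, "harm": 1},
    {"name": "ask consent", "deceives": False, "benefit": 2, "harm": 0},
]
print(hybrid_choose(actions, [no_deception], fairness_model)["name"])  # "ask consent"
```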

We also went into a very different area. We went into an area that we call "beyond reason" and what we call "super-rational faculties."

One of the most interesting things about Moral Machines was that it was perhaps one of the first times that anyone had looked comprehensively at human moral decision-making. When we look at human moral decision-making we take a lot of things for granted—we take human emotions for granted; we take being embodied creatures in a physical universe for granted; we take consciousness for granted; we take for granted what's called a theory of mind, the ability to recognize that what's in somebody else's mind is different from what's in yours and to deduce what's in that mind so you can cooperate and coordinate your activities together.

These are all capabilities that seldom entered into the ethics conversation because all humans have them, but when you're building a machine suddenly you have to say, "Well, is the ability to reason about ethics enough or do you also need these super-rational/supra-rational faculties; and, if so, what's their function? What's the function of being conscious? Can a machine that doesn't have consciousness get it right; can it figure out what to do?"

Well, yes it can in some situations that are relatively simple—it's okay, if you don't have to hurt that creature, don't hurt it—but not in more sophisticated challenges, and it may not even recognize that it's an ethically significant situation.

From that perspective Moral Machines perhaps introduced people to thinking about ethics in a more comprehensive way than they had beforehand.

But I'm not somebody who believes that ethics is about algorithms, or even about getting it right all the time. I think ethics is a bunch of different languages that help underscore what considerations we would like to have factored into our choices and actions, and people are not going to agree on ethics because they prioritize these elements differently. One person considers purity to be of great significance, whereas others say, "I could care less about it."

So for me, the reason I got into things like the silent ethic was just to ask questions such as: What really grounds our ethics? Where are we going with it?

I think one of the problems that has occurred now is that we are in an evolving understanding of ethics, an evolving appreciation that our moral psychology, our states of mind, are also factors in ethical decision-making, but we don't know what to do with all of that yet. In other words, we need a degree of evolution, we need new stages of learning, in order to come up with appropriate ethical disciplines for dealing with the bio/digital revolution.

ANJA KASPERSEN: Is this lack of behavioral insight, combined with the increasing power and sophistication of algorithmic data technologies—including their reach into what is in my view the most sacrosanct of all types of data, the data about our biological selves, which we are now tapping into in ways that were unheard of just a year ago, one of the big game changers of this pandemic—in some ways the perfect storm?

WENDELL WALLACH: I don't have any great insights on perfect storms once they occur, but I do understand that we are navigating uncertainties that can take us into perfect storms. Then, if we have these perfect storms, what actually survives? Is it humanity that survives? Or is it some trans-human beings wired into computers that survive? Or are there a bunch of people still around who are basically treated like machines, manipulated to perform particular tasks or to be good consumers? I don't know.

I'm sort of left back here at this point where what we're trying to do is navigate uncertainty. The tools we have for navigating uncertainty actually happen to be ethics. The ethical languages are largely about illuminating those elements of our lives that we cannot empirically quantify.

It may be that if we knew everything it would be black-and-white what to do in every situation that arose, but the fact is we don't know everything and we are creating greater and greater uncertainties by the day.

The role of AI ethics is to at least set parameters on what is acceptable and what is not acceptable and to reinforce those parameters in a way that can provide some degree of assurance, some degree of trust, that the technological revolution will unfold in a humanly meaningful way, in a way that is respectful of human needs. That's not a given. That's something we're going to have to fight for.

The given is that it is unfolding right now in ways that exacerbate inequalities and create new inequities, and what we need to do is chart the pathways that actually enhance the ability of humanity as a whole to prevail, to flourish, to develop self-understanding, to develop moral intelligence, and to create meaningful lives for ourselves and the next generations.

ANJA KASPERSEN: When we first met you described yourself as a scholar of natural philosophy. It was a really good reminder for me that as we try to understand science and bring everything together we have to use what we learn about the physical world to inform our views of the world as a whole.

Could you just bring us into this way of seeing the world of science, based on your different incarnations, if you will: professional, spiritual, personal?

WENDELL WALLACH: Thank you. That's a wonderful question and one that's great fun to pontificate about, so let me try for a moment.

Natural philosophy goes back to the dawn of the Enlightenment Era. We lose track of the fact that philosophy and science were at one time a single discipline, and that Descartes and Bacon, those who created the modern world, modernity, didn't call themselves scientists or philosophers; they were natural philosophers, as were those performing some of the first research experiments.

But we have moved into a world where philosophy became alienated from science. We had a lot of scholarship around dead philosophers, and analytical schools came into being that were largely divorced from what the scientific method was revealing about human capabilities and human nature. What was born, or reborn, alongside the cognitive sciences were natural philosophers, or naturalistic philosophers, as the term is sometimes used.

Those are largely people whose philosophy focuses specifically on science and on what we are learning from science, but also on what the science is showing and not showing. A lot of conclusions have been jumped to. You know, mystics love the uncertainty principle in quantum mechanics. There are other examples of that.

When mirror neurons came along there were all kinds of speculations about what they did and did not show, and my colleague Colin Allen, with whom I had written quite a bit, wrote a paper saying the research doesn't necessarily show what people were presuming it showed. He is a leading naturalistic philosopher looking at consciousness studies and cognitive science, examining the research and trying to help people interpret what the sciences have actually revealed and what perhaps are just assumptions that we are imputing to them.

But let me come back for a moment to my own evolution, because my father was a wonderful physician who loved the history of science, loved science very broadly, loved history and politics and all kinds of things. He wanted his children to become scientists. None of the three of us became scientists. There was a while when I thought I wanted to go into astronomy and, interestingly enough, I was fascinated by genomics in its mid-1960s incarnation, but the civil rights and anti-war movements distracted me from focusing on science and got me much more involved in social science, and so that's what I studied in graduate school.

Now, the interesting point about my upbringing is that we were brought up with many of the obvious prejudices around race, even though there were no Black people in northwestern Connecticut when I grew up. We weren't necessarily brought up with creedal prejudices, even though they were present in our family in a way that I didn't understand. My parents were of Jewish ancestry, but they had escaped from Germany with their lives and had never identified with Judaism, so they didn't necessarily convey to me that I was of Jewish ancestry.

But the one bias we were brought up with—something I didn't fully understand until I was in my twenties; my sister finds that impossible to believe, but it is true—was elitism: the sense that we were somehow special and that we belonged to a cultural elite. That is perhaps the strongest, most unconscious bias I have had throughout my life.

Even when I got involved in meditation and spiritual pursuits, like many of those around me, I wanted to be special, and if I wasn't going to find miraculous powers at least I could get enlightened, whatever that meant. It took me years to understand that it meant many different things and that there are many different enlightenments. But again the focus was on individual specialness. Even for scientists the idea was still very much about becoming the "great man," the Einstein of your age. And in social science it was the same way. There was all this focus on the individual and what the individual can achieve and perform.

I think what has been fascinating to me as I look back at my life and its many incarnations is that I was led through experiences that carved out a rich sensibility, and the product of that sensibility is a little of what I've tried to share today: that none of us really exists as an individual fully unto ourselves, that we are the product of culture, of interactions.

Elitism always has the same goal: that I have a superior role or am seen as superior, that it is my viewpoint that is of importance, and in its worst forms it always says, "And you could never understand what I understand." To me that's what's happening with digital elitism. We see it over and over again. We've seen it as spiritual elitism, intellectual elitism, industrial elitism, financial elitism, cultural elitism. It comes back over and over in every form, but its mask is always that the elitist individual is special, not necessarily understood by others, not necessarily appreciated.

So when I see it in digital elitism, it usually takes the form that "You could never understand these algorithms the way I understand them. You can't understand their power or their intelligence." When somebody says that, I start to quake a little bit because if you are now going to make humanity dependent on technologies that only a digital elite understands, how do we know whether to trust the digital elites?

It doesn't even mean that you are an untrustworthy individual. It may be that you are perfectly sincere, and you may even be correct that I couldn't understand some of the technologies you are creating and deploying even if I tried. But that's nevertheless a pretty dangerous road for us to go down. And it presumes not only that you have this specialized digital expertise, but that I have to trust you have the intelligence to understand how these technologies will impact society, what the tradeoffs of their implementation will be, and the ability to control or design them in a way where they will not do anything harmful.

The one thing I've learned is that nobody has that capacity unto themselves. That's true of all of us. And few of us have enough self-understanding even to recognize that we have blind spots, let alone to know what those blind spots are.

ANJA KASPERSEN: And the problem is of course that an algorithm would be able to pick up on that.

WENDELL WALLACH: The problem is that an algorithm would be able to pick up on that to the extent that it had to manipulate my attention to get me engaged in buying whatever piece of junk it wanted me to buy or to buy into whatever ideology it wanted me to buy into or to buy into a technological vision of the future that enhanced my advantages but had tradeoffs which are highly disadvantageous to others.

ANJA KASPERSEN: It is also about the investment made by people with power and resources into ensuring, as you've been saying, a trustworthy unfoldment of the digital and biotech age. We need to invest as much in the security of these systems as we do in the systems themselves.

WENDELL WALLACH: Exactly. We are already in an uncertain world, we are walking into an uncertain future, and we are losing track of what we have to do to ensure that it is worthy of our trust: that at least the processes are worthy of our trust even though we may not always know what the results will be, and that the processes are ordered in a way that makes the likelihood of a better future very, very high.

I think that is lost at the moment. It's not that there aren't millions of people in AI ethics and in research ethics more broadly, or that there aren't well-intentioned researchers—I mean, nearly everybody who is engaged in scientific research believes that the research they are working on will be for the betterment of humanity. So it's not that there's a lack of good intentions out there, but we have failed to execute those intentions in a way that leaves our compatriots, our fellow citizens, with the feeling that they can trust the unfoldment of this technological future as being in their interest and in the interest of their children.

ANJA KASPERSEN: Thank you, Wendell. As always, this has been a wonderful conversation.

This podcast is supported by Carnegie Council for Ethics in International Affairs, and a big thank-you to the team in the studio helping us produce this podcast.

WENDELL WALLACH: Thank you, Anja.
