Moral Tribes: Emotion, Reason, and the Gap Between Us and Them

Nov 6, 2013

TV Show

Highlights

How do human beings make moral decisions? Sometimes we go with our emotions and "think fast" and sometimes we use reason and "think slow." Neuroscientist Joshua Greene's research shows that for problems within small groups, it's best to think fast. But for global problems between larger groups, we need to learn to think slow.

Introduction

JOANNE MYERS: Good morning. I'm Joanne Myers, and on behalf of the Carnegie Council, I would like to thank you for joining us. I promise that our discussion today will provide no leftover tricks from last night, but we do have some treats that we intend to nourish your brain with.

It's a pleasure to welcome Joshua Greene to this Public Affairs program. Josh has become well known for his pioneering work in combining cognitive neuroscience with philosophy and experimental psychology. He is here to discuss his work, which he has masterfully synthesized into a practical philosophy that he hopes will solve our biggest problems. His book is Moral Tribes: Emotion, Reason, and the Gap Between Us and Them.

As director of the Moral Cognition Lab at Harvard University's Department of Psychology, Josh has said that the purpose of his research is to understand how moral judgments are made and shaped, whether by automatic processes such as emotional gut reactions or by controlled cognitive processes such as reasoning and self-control. In essence, what this means is that our speaker is creating quite a name for himself in researching how people's brains actually work when they are making decisions about moral problems, when they are making judgments about what is right, what is wrong.

Moral problems are not limited to any particular kind of situation nor to a special domain. They surround us all the time and come into play when people interact with others. Challenges arise when people of different races, religions, ethnic groups, and nationalities share similar values, but have different perspectives when making moral decisions. Given this dilemma, how, then, in this globalized world, can we improve our prospects for peace and prosperity? How can we gain a better understanding of them, the other?

While some progress has been made in getting individuals within a group to start cooperating, for some time now our speaker has been thinking about how to foster cooperation between groups. In Moral Tribes, Josh tells us that a good place to start learning about morals is to understand how these decisions are made, and that the place to end is with a set of maxims for navigating this modern moral terrain. Along the way, we learn about the structure of moral problems, how they differ from the problems our brains were originally meant to solve, and how different kinds of thinking are suited to solving different kinds of problems. In the end, Josh suggests a common moral currency that can serve as a basis for cooperation between people who are otherwise deeply divided on matters of morality.

By now, you've probably realized that Josh is prepared to give us just what is needed—a global ethic, if you will—so that we will be able to accomplish together what we sometimes cannot accomplish alone.

Please join me in welcoming a very interesting thinker, our speaker Joshua Greene.

Remarks

JOSHUA GREENE: Thank you so much for having me. It really is a wonderful honor to be here.

What I'm going to do is try to get across three ideas. One is what I call the tragedy of common-sense morality, the other is the idea of morality fast and slow, and then, finally, the idea of common currency.

So starting with the tragedy of common-sense morality, the tragedy of common-sense morality begins with a familiar parable that I think gets to the heart of what morality is really all about. This is Garrett Hardin's "tragedy of the commons," which I'm sure many of you are familiar with, but I'll recount the basic story. Garrett Hardin was an ecologist. He was especially worried about the problem of overpopulation, but his parable really applies to almost any major social problem.

It goes like this: you have a bunch of herders who are raising their sheep on a common pasture. These are very rational herders, and they think to themselves, "Should I add another animal to my herd?" And they say, "Well, when I get to market, if I have another animal, that's one more animal I can sell. That sounds good. That's the benefit. What's the cost? Well, for me, the cost is not much. The cost is grazing, and the animals graze on this common pasture, so that cost is shared by everybody. So I gain something substantial. I don't pay very much. I'll add another animal."

Every herder has the same thought, and, as a result, they grow and grow and grow their herds. Then it gets to a point where there are too many animals for the commons to support. As a result, all the animals die and everybody ends up being worse off.

This is the fundamental social dilemma—that is, when there's a tension between what's good for me as an individual and what's good for us collectively. When our interests are perfectly aligned, there's no problem. When our interests are perfectly opposed, there's a problem, but there's not really anything that we can do about it. Life gets interesting—and this is how life is most of the time—when there's a tension between the me perspective and the us perspective, but it's not a zero-sum game.

What are the herders to do? Well, the answer to that is familiar. We need something like morality, perhaps morality codified in the form of law. We say, "Okay, you have to limit your herds for the greater good." But it's not quite that simple.

There are a lot of questions to be resolved when you try to think about this in more concrete terms. Do big families get more sheep? Is that fair? The small families might not like that so much. Do you get the same number of sheep per family? The big families might think that that's not exactly fair. And what if someone takes one of your animals? Can you defend your animals with an assault weapon? What happens if one of your animals gets sick? Are we going to have collective health insurance for our herds?

There are a lot of different ways in which the problem of cooperation can be solved. Let me illustrate this point further with my own personal sequel to Hardin's parable.

Imagine that you have this large forest, and around this very big forest you have a lot of different tribes. They are all raising their herds, but they raise their herds in different ways. They live in different ways. Over here you have, let's say, your individualist tribe, where they say, "Enough of this common pasture. We're just going to privatize it. We're going to divide up the commons. Everybody gets their own plot of land." That's one way of solving the problem. Now everyone is on their own, and the cooperation consists in just everybody staying out of each other's territory. That can work, perhaps.

Another way—the opposite extreme—is to have your communist tribe, where they say, "We're not only going to have a common pasture, we're going to just have one common herd." That perhaps is one way of solving the problem.

Tribes can differ in a lot of other ways. What happens when someone, again, encroaches on your herd? Do you have a strict code of honor where you have to go after and kill them and their family if someone does something nasty to one of your sheep, or do you have a more harmonious attitude, where you say, you know, we live and let live?

Different tribes can have what I call different proper nouns—different texts, different leaders, different gods that they pray to. No two tribes are going to be praying to the same god, at least if these are tribes that evolved separately. So you can have a lot of different terms of cooperation.

So now we have all these tribes cooperating internally under different terms around this forest. Then there's a big forest fire and the whole forest burns down. The rains come, and suddenly there's this lovely new pasture in the middle. All the tribes say, "Hey, that looks nice." They all move in. Now the question is, how are they going to live on those new pastures? We have individualists and collectivists. We have tribes that pray to this god versus that god and live by these rules versus those rules, and the ones who defend their honor fiercely and the ones who try to get along with more or less everybody.

Those new pastures? That's the modern world. The modern world is not just about a bunch of individuals trying to get together and be cooperative. It's about a bunch of groups that are separately moral, differently moral, separately cooperative, but differently cooperative, all trying to live together in the same space.

What I told you are a couple of parables. I see a lot of you nodding. You can see that this is connected very much to the world that we live in. Let me just highlight this with a couple of examples.

There were a couple of interesting moments late in 2011 in American politics. There was a Republican primary debate where Ron Paul, who was then running for the Republican nomination, was asked by CNN's Wolf Blitzer: "Suppose there's a guy who said, ‘I'm young. I'm healthy. I'm not going to buy health insurance,' and then something terrible happens. He falls into a coma. He's at the emergency room. He doesn't have health insurance. What should we do?"

His question for Ron Paul was, "Should we let him die?"

Paul, like a good politician, tried to dodge the question. He said, "Well, he should have bought health insurance."

Blitzer wouldn't let him go. He said, "Okay, he should have, we agree. But he didn't. Should we let him die?"

He didn't quite know what to say. He didn't want to come out and say, "Yes, let him die," but he didn't quite want to say, "Yes, we should take care of him."

There were people in the audience who had a much clearer answer about this. This is the Republican primary debate, and you heard voices from the audience saying, "Yeah, let him die."

What Paul said was his family and his friends and the churches should take care of him. Someone else should take care of him.

That was a very interesting response. He didn't quite say let him die, but he didn't quite say the government should pick up the tab and save his life if no one else is willing to.

Around the same time, Elizabeth Warren made her famous stump speech from the living room that went viral on YouTube. This is the speech in which she's saying we're all in this together, essentially: "You may think that you got rich on your own. You may think that you have realized the American dream. And you have, in a sense. But if you built a factory and it turned into a wonderful thing, you moved your goods to market on the roads that the rest of us paid for. You hired workers that the rest of us paid to educate. You were safe in your factory because you had fire forces and police forces that were keeping you safe. Part of the social contract is, you owe something back to society and to the next generation because of the benefits that you experienced. No one is in this alone. You may be focused on your own wonderful efforts and genius, but, whether you realize it or not, we're all in this together."

Now, you've got the individualist herders and you've got the collectivist herders. Other issues are much farther from abstract political philosophy and closer, perhaps, to religion. As you know, we have great disagreement about, for example, gay marriage, great disagreement about what to do about abortion. A lot of people's convictions here are religious—and religions, ultimately, at least begin as tribal institutions—so they are appealing to the culture, to the moral norms of their tribe, to adjudicate these problems.

Two problems. There's the basic cooperation problem that Hardin identified—getting individuals to form a cooperative group. You can think of that as me versus us. Then there's the problem of us versus them, or the problem of "usses," the modern moral problem of forming a larger group, where it's not about selfishness versus morality, although that's how it may appear to be; what it's really about is different moral ideals, different moral values, competing against each other.

So the modern moral problem is not how you stop people from being bad, how you stop people from being immoral. That's a problem, too. The modern moral problem is, how do you deal with a bunch of different moralities with people all trying to live together? And, of course, the self-serving biases that you see at the individual level—pulling for me—you also see at the higher level, pulling for us.

Changing gears to the interior of the mind for the moment, morality fast and slow: My favorite analogy for this—and many of you are familiar with this idea, presumably from books by Daniel Kahneman and Dan Ariely [Editor's note: Check out Dan Ariely's 2012 interview at Carnegie Council], the idea that we have gut reactions and we also have more deliberate thinking—my favorite analogy for this is the modern digital SLR camera. If you've got a camera like this, you've got your automatic settings, like landscape and portrait, click, point-and-shoot—it takes care of everything — and you've got your manual mode, where you can set all of the settings yourself.

Why do you have these two things? Well, it allows you essentially to navigate the tradeoff between efficiency and flexibility. The point-and-shoot settings are very efficient, and they're good for most things. But they're not very flexible. If you're doing something typical—a picture of a mountain from a mile away—that works great. But if you're doing something fancy, not so good.

The manual mode is great because it's flexible. You can do anything with it. But it's not very efficient. You have to know what you're doing. If you don't know what you're doing, you might make a mistake.

Having both is what enables you to toggle back and forth, and have efficiency when you need it and flexibility when you need it.

The human brain has essentially the same design. Our automatic settings are our gut reactions, which are essentially emotional responses to the situations that life puts before us. Our manual mode is our capacity for deliberate reasoning. You might say, "Which is better, automatic settings or manual mode?" But if you put it like that, you already know the answer. It's not that one is inherently better than the other. They are good at different things. Automatic settings are good when you have experience to draw on that has informed your gut reactions in a wise way, and manual mode is better when you're facing a situation that is fundamentally different.

Let me give you an illustration from my own research of morality fast and slow. How many of you have heard of the trolley problem? Some of you have, some of you haven't. For the uninitiated, I will initiate.

The original version of the trolley problem—one of them—goes like this: You have the switch case. The trolley is headed towards five people. They're going to die if you don't do anything. But you can hit a switch that will turn the trolley away from the five and on to the one. You ask people, "Is it okay to hit the switch to minimize the number of deaths?" And most people say yes.

Next case: The trolley is headed towards five people. This time you're on a footbridge over the tracks in between the oncoming train and the five people there. The only way you can save them is to push this big person who's standing next to you off of the bridge and onto the tracks. You kind of use him as a trolley stopper. He'll die, but the five people will live.

Now, I know you're all very smart, practical people, and what you're thinking is, "Wait a second, is he really going to be able to stop a trolley with a big guy? Is this really going to work? Why don't you just jump, yourself?" Put all that aside. This is your manual mode trying to wiggle out of the problem. Face the problem the way I'm giving it to you.

Most people say that it's not okay to push the guy off the footbridge and use him as a trolley stopper, even making all those assumptions that you don't want to make because you would rather wiggle out of the problem.

So what's going on here? A lot of research that I can only describe briefly suggests that it's a gut reaction, it's an emotional response—we have seen this in the brain, which I'll describe in a moment—that's making you say, "No, don't push that guy off the footbridge," and it's a more controlled, conscious cost-benefit analysis that's making you say that it's better to kill one to save five people. How do we know this? I'll give you one example.

If you look at people who have damage to a part of the brain called the ventromedial prefrontal cortex, which is essential for integrating emotional responses into decisions, those people are about five times more likely to say that it's okay to push the guy off the footbridge or do other harmful things in order to save more lives. People have done research giving people different kinds of psychoactive drugs, and a whole lot of other things, and it points the same way: gut reaction making you say no, cost-benefit reasoning making you say yes.

So that's one example of fast and slow.

Let me take this fast-and-slow idea and apply it back to the original tragedy of the commons. This is research done by David Rand and myself and Martin Nowak. The laboratory version of the tragedy of the commons is an economic game called the public goods game. It goes like this: Let's say you have four people come into the lab. Everybody gets 10 dollars. Everybody has a choice. You can keep your money or you can put your money into a common pool. You put it into the common pool, the experimenters double it, and divide it equally among everybody.

If you're selfish, if you're me-ish, what you do is you keep your money. Why? Well, you get your 10 bucks plus your share of whatever went into the pool and got doubled. That's the way you maximize your personal take. If you're us-ish, if you want the group to do as well as possible, you put all of your money in. That's how the group gets the biggest gains: it all gets doubled, and everybody comes out ahead.
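
[Editor's note: The payoff structure Greene describes can be sketched in a few lines of Python. The setup follows his description—four players, a $10 endowment, contributions doubled and split equally—but the function itself is only illustrative.]

def public_goods_payoffs(contributions, endowment=10, multiplier=2):
    # Each player keeps whatever they did not contribute, plus an equal share of the doubled pool.
    pool = multiplier * sum(contributions)
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

print(public_goods_payoffs([10, 10, 10, 10]))  # everyone cooperates: [20.0, 20.0, 20.0, 20.0]
print(public_goods_payoffs([0, 10, 10, 10]))   # one free rider does best: [25.0, 15.0, 15.0, 15.0]
print(public_goods_payoffs([0, 0, 0, 0]))      # nobody cooperates: [10.0, 10.0, 10.0, 10.0]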

Interesting question. When we face this kind of dilemma, are we intuitively selfish, where we're thinking, "I want my money," and then we think, "Oh, no, no. I should be nice. I should put it in"? That's the more deliberative response. Or is it that we have a gut reaction that says, "Oh, I should be nice," but then we think, "Wait a second. I could really get the short end of the stick here"?

So we did a lot of different things. I'll tell you just one version of this. We put people under time pressure. We had people make this decision, but they had to decide in 10 seconds. It turns out that putting people under time pressure makes them more cooperative. People put more money into the common pool when you ask them to decide quickly. If you force them to decide more slowly, they put less money into the common pool.

What this suggests is that, at least in some contexts, people actually have an intuitive response that says, be nice. You might say, "Oh, so we're hardwired to be good." Not true. I would say we are hardwired to have the capacity to be good in this way. But it varies a lot. People who report not trusting the people they interact with in their daily lives don't show this fast/slow kind of effect. As I'll explain in a moment, you see different patterns in different places around the world. So it's highly dependent on experience and on culture. But at least for some people, some of the time, you have a gut reaction that makes you more cooperative.

This is an example of gut reactions helping us solve the tragedy of the commons, the original problem of me versus us. What I would say is that, in general, this is what morality is—for the most part, basic morality is a suite of emotional responses that enable us to solve that problem. So we have positive emotions that make us want to help other people (I care about you), negative emotions that keep us doing the right thing (I'd feel guilty or ashamed if I didn't), negative emotions that we apply to others (you will have my scorn and contempt if you keep your money instead of putting it into the pool), and positive emotions that we apply to others (you'll have my gratitude if you do what I consider to be the right thing).

What I would say is fast thinking, emotional thinking, for the most part, does a good job of getting individuals to get along. That's what we evolved to do. We evolved for life in small groups, and morality is essentially about allowing us to care about other people in that context.

But that's not the modern problem. The modern problem is us versus them. It's about cooperative groups with different values and different interests getting along with each other on those fertile new pastures. This is where our intuitions do not necessarily serve us well. Let me tell you about a fascinating set of experiments done by Benedikt Herrmann and colleagues.

They took this public goods game that I described, the tragedy of the commons, and they had people in different cities around the world play this game. They did it a little differently. In the experiment that I described, people just played it once. In their version, the same people played it over and over again. They also added the opportunity to punish: you could punish people for the way they played in the last round. So if somebody is uncooperative, you can say, "All right, I pay a dollar, and as a result, you lose three dollars." This is a way of enforcing cooperation.
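
[Editor's note: A minimal sketch of that punishment stage, using the costs Greene mentions—the punisher pays $1 and the target loses $3 per punishment point. The function name and data layout are illustrative, not the experimenters' own code.]

def apply_punishment(payoffs, punishment_points):
    # punishment_points[i][j] is how many points player i assigns to player j
    adjusted = list(payoffs)
    for i, row in enumerate(punishment_points):
        for j, points in enumerate(row):
            adjusted[i] -= 1 * points  # cost paid by the punisher
            adjusted[j] -= 3 * points  # loss imposed on the punished player
    return adjusted

# e.g., three cooperators each spend a dollar punishing the free rider from the earlier example
print(apply_punishment([25, 15, 15, 15], [[0, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]))
# -> [16, 14, 14, 14]: punishment narrows the free rider's advantage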

The beauty of this experiment is that everybody in these cities around the world is in exactly the same situation. They test them very carefully to make sure everybody understands the rules and everybody understands exactly what's going to happen. You get enormous differences in how people play and the amount of money that they walk away with. That is all due to culture and individual experience. There may be genetic differences, but that's not the obvious hypothesis, for reasons that I'll explain.

So what's going on? There are some places, like my hometown of Boston and Copenhagen and St. Gallen, where cooperation starts out high—people put their money right in—and it stays high. There are other places, like Seoul in South Korea, where cooperation starts out kind of in the middle and then people put in more money as people start to enforce it. You say, "Hey, you should be doing this," and people do it. The cooperation ramps up and, by the end, it looks just like Copenhagen, the capital of cooperation. Then there are other places, like Muscat and Athens, where cooperation starts out low and it stays low. It never ramps up.

They were kind of amazed by this. Why is that the case? They found this phenomenon that they didn't expect, which they call anti-social punishment. The people who didn't cooperate were punishing the cooperators for cooperating. I was baffled by this.

They interviewed people. They asked people, "What's going on?" They said, "You know, I don't like this whole game. I want everyone to know that I'm not going in for this, and none of your do-gooder little cooperation things."

It's an interesting question. It's not that people in Athens are uncooperative people. If you've been to Athens, there are nice people there. But different cultures have different attitudes towards cooperating with strangers in a strange, anonymous context like this one. You're brought in with a bunch of people you don't know and asked to cast your lot with them—in some cultures, that's perfectly acceptable. In other cultures, it's not. Cooperation is all about personal relationships.

That works very well within a tribe, but that's not a basis for cooperation in the wider world. In some cultures, people's gut reactions help them cooperate with strangers, which is what you need to do if you're going to solve large-scale problems like climate change. Other cultures are not so into this.

If you look at questions on the World Values Survey—questions about your attitude towards tax evasion or jumping the turnstiles at public transportation stations—the places where people came away with a lot of money in the game were the places where people had very negative attitudes towards those things. The places where people had more lax attitudes were the places where, in the lab, people played this game in a way that left them worse off—where they suffered, essentially, the tragedy of the commons, at least when playing with people they don't know, people they don't see as part of their own personal social circle.

Another example of our gut reactions, our intuitions, not serving us well in the global sphere, or at least creating a kind of inconsistency: A long time ago, philosopher Peter Singer posed the following dilemma to the affluent world [See Peter Singer's 2011 talk at Carnegie Council, where he discusses this]. He said, suppose you're walking along and there's a child who is drowning in this pond and you can wade in and save this child's life. But you will ruin your fancy new Italian loafers and your suit if you do this. How many of you think it's okay to let the child drown because you're worried about your $1,000 clothes or whatever they are? How many people think that's okay? No hands are going up.

Different case: Somewhere on the other side of the world, there are children who are badly in need of food and medicine, and a donation of, say, $500 or $1,000 can probably save someone's life, maybe several people's lives. At least it has a good chance. You say to yourself, "Well, I'd like to save them, but I was thinking of buying a nice set of clothes. I have older clothes, but I'd like some new ones." How many people think you're a moral monster if you spend your money on things you don't really need but are nice to have, instead of using it to save starving children on the other side of the world? This time no hands are going up.

How many people think it's okay, it's morally permissible to do that? All of your hands should be going up, because we all do this, myself included.

But there's a tension here. In some sense, we see it doesn't really make sense, but, of course, it's what we all do. I do it, too. So what's going on here?

With a student, Jay Musen, we wanted to do a more controlled, experimental version of this. Borrowing ideas from philosopher Peter Unger, one version goes like this: You're on vacation in this lovely but poor country. You have your little cottage up in the mountains overlooking the coast. A terrible typhoon hits. There's devastation along the coast. There's flooding. People are not going to have food. The sewage is everywhere, and disease is becoming a big problem. Do you have an obligation to help?

In our sample, 68 percent, if I recall correctly, said that you have an obligation to help. The way you help—don't go down there; just make a donation to the aid organization that's already working on the ground. A majority of people say that you have an obligation to help in that case.

We ask a different group of people, randomly assigned—everything is the same as the way I described it before, except that now your friend is there and you're at home. Your friend has a smartphone and is showing you everything that's going on. You know everything that your friend knows about the terrible typhoon, and you have just as much ability to help. You can donate to this international aid organization just as easily. The only difference is, are you at home or are you there as you imagine this scenario?

Now about half as many people say that you have an obligation to help. It's kind of strange, in a way. Everything is the same. The situation is the same. You have all the same knowledge. You have the same ability to help. It's just that where you are sitting makes a huge difference.

Why should that be? From a certain perspective, it doesn't make a lot of sense, but if you think about this like a biologist, it makes a lot of sense. We evolved, as I said, to cooperate in small groups. If there's someone in your tribe who is drowning right in front of you, you're better off saving that tribe member. People on the other side of the world either don't exist to you or they're the competition. So it makes sense that we have this split, from a biological perspective. But that doesn't necessarily mean that it makes moral sense. It makes sense biologically that we have heartstrings, but the strings are very short. You can't pull them very easily from across the world.

This is a case where I think there's a kind of myopia in our gut reactions.

Let me give you one more example. Talking about international affairs, a lot of the problems we deal with are very big problems involving large numbers of people. Thinking about large numbers of people is a very unnatural thing for us to do. Would you like to save 10,000 lives this way or 20,000 lives that way? Not the question our hunter-gatherer ancestors were facing.

How do we think about these things? With a student, Amitai Shenhav, we did a brain imaging study where we had people make judgments about saving people's lives. We say, okay, you're on this Coast Guard rescue boat. You can save this one person who's drowning for sure or you can change course and save these other people over there. We varied the number of people you could save—it could be 1, 2, 5, 20, 40—and we varied the probability that you could actually save them—say, a 50 percent chance. To make a reasonable decision, you have to keep track of how many lives are at stake, what the odds of saving them are, and put that together.

We found a part of the brain that seems to be keeping track of the magnitude and a part of the brain that seems to be keeping track of the probability and a part of the brain—in fact, the one I pointed to before, the ventromedial prefrontal cortex, among others—that is integrating those two pieces of information for something like expected moral value.
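
[Editor's note: In rough terms, the quantity being integrated is an expected value—the number of lives at stake weighted by the probability of saving them. A toy illustration of the Coast Guard choice, with made-up numbers:]

def expected_moral_value(lives_at_stake, probability_of_rescue):
    # expected number of lives saved by choosing this option
    return probability_of_rescue * lives_at_stake

stay_course = expected_moral_value(1, 1.0)     # the one person you can save for sure -> 1.0
change_course = expected_moral_value(40, 0.5)  # 40 people at a 50 percent chance -> 20.0
print("change course" if change_course > stay_course else "stay course")  # prints "change course"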

That's interesting, I think, in and of itself. But one thing that was especially interesting is that the parts of the brain doing this are the same parts that track magnitude and probability when people make self-interested financial decisions—you can have $1.00 for sure or a 50/50 chance of winning $1.50 or $2.50. And it's really the same system that you see making decisions about things like food in an animal like a rat. Should I go for the cheap, easy food here or go foraging for the really good stuff that I might not be able to get? It's the same kind of problem for a rat.

Something interesting about our responses to problems with large numbers of lives at stake, or whatever it is, is that we exhibit these kinds of diminishing returns. The first few lives you can save, that feels really important. But the difference between saving 100 lives and 105 lives—people don't react very differently to that. Ten thousand lives versus 11,000 lives—it all kind of feels the same.

Why is that? The problem may be that we're essentially using our rat brains to think about these global problems involving large numbers of lives. If you're a rat, the goods you're dealing with, like food, exhibit diminishing returns. You don't have a fridge. You can't save the stuff for later. Once you're full, you're full. It's not worth that much more. There's only so much that you can eat. There are only so many times that you can mate, or whatever it is that you're interested in as a rat.

That's what our basic neural evaluation system was designed for. Now we take that ancient mammalian system for attaching values to behavioral options, and we're using it to think about things like saving large numbers of lives around the world.

That doesn't make a lot of sense. Maybe there's a way to correct for this. That's what I want to talk about finally. This is the idea of common currency.

Coming back to the parable of the new pastures—that is, we have different tribes with different values, some more individualist, some more collectivist; some pray to these gods, some pray to those gods, different ideas about things like gay marriage or abortion or whatever it is. How can we all get along?

I don't think there's a magic bullet solution, but I do think there's a way of thinking about these things that is better than what we currently do. I think the people who originally got this right were Jeremy Bentham and John Stuart Mill in the late 18th and 19th centuries.

What was their idea? Mill and Bentham were kind of amazing. They were way ahead on almost every major social issue of their day. They were among the first opponents of slavery, among the first proponents of free markets, workers' rights, what we now call women's rights. All of that stuff, they were way ahead. How did they get so far ahead? I have a psychological hypothesis, but let me put this in more abstract philosophical terms first.

They began with two basic questions. One is, who really matters? The other is, what really matters? The "who really matters"—their answer is essentially the golden rule. That is, everybody's interest counts equally. This is a very untribal thing to think. What you normally think is that I myself, my people, we count more and those people over there count less. But if you try to think of what a global meta-morality would look like—that is, trying to have a moral system that adjudicates among competing tribal values, in the same way that ordinary morality adjudicates among competing individual interests—this meta-morality is going to have to be impartial. So that's one.

The other part of it—and this is the more controversial part—is that what really matters, our real common currency, is our human experience, our capacity to be happy, our capacity to suffer. What they said is that if you think about all of the things that you value and ask, "Why do I care about that?" and then keep asking, "Why do I care about that? Why do I care about that?" until you run out of answers, it all ultimately comes down to the quality of your experience. You say, "Why does it matter to be honest?"

Well, when you're dishonest, it hurts people. What do you mean, it hurts people? They lose money or something like that? Yes, they lose money. Why is it bad if people lose money? Well, then they can't have what they need to support themselves. Why do they need—you go on and on, and ultimately it comes down to somebody's well-being in the experience sense of avoiding their suffering or promoting their happiness.

That's highly controversial, but that's at least a plausible idea.

So you put these ideas together and you say happiness, suffering, experience—that's the value behind our other values. That's how they all cash out. And if we're going to have some kind of universal moral system, it's going to have to be impartial. So they said we should try to maximize happiness in an impartial way.

Most philosophers today think this is a terrible idea, because there have been tons of counterexamples, intuitive counterexamples. I gave you one, the footbridge case. It seems wrong to kill that guy in order to save those extra lives.

Hold that thought and join me in just thinking about this in the abstract. It was this more abstract thinking that enabled them to make wonderful progress. A particularly striking example of this comes from Jeremy Bentham, who, around 1785, penned one of the first defenses of what we now call gay rights. The current issue here in the United States is gay marriage. Some of you are for, some of you here may be against. But in Bentham's time, not only was gay marriage not legal, gay sex was punishable by death. I'm assuming that most of you think that that's not okay. But at the time, everybody thought that was okay, except for Bentham. How did he get to this crazy idea?

He had this moral framework. He had done a lot of thinking about morality and came to the conclusion that the quality of experience is what ultimately matters and that we should be looking at the quality of experience from an impartial perspective. He said to himself, "I've been trying for years to figure out how to justify the rules that we have against being gay, and I can't do it. So maybe it's actually not so bad."

With that crazy thought—crazy at the time—he leapt ahead two centuries in moral thinking.

That's manual mode. That's thinking slow. He had the same gut reactions as everybody else, but he didn't take them too seriously—or he took them seriously, but he didn't give them veto power, so to speak, over his own moral thought.

Those gut reactions are very good. Let me go back to the case of the trolley, because this is the kind of thing that has convinced most philosophers that this kind of thinking is terrible. It's good that we have negative emotional responses to pushing each other off of footbridges. It's good. We don't want people going out and being violent. Then you create this bizarre circumstance in which committing this act of interpersonal violence is guaranteed, we stipulate, to promote the greater good. No wonder you've got your automatic settings in tension with your manual mode in that case.

But our gut reactions, those automatic settings, are not necessarily as reliable as you might think. Let me give you an experimental example of this.

You might ask, what is it that triggers that emotional response? We give people a version of the footbridge case, like I described. About 30 percent of the people in this sample say that it's okay to push the guy off the footbridge. Describe it differently so that, instead of pushing the guy with your bare hands, there's a trapdoor on the footbridge, and all you have to do is hit a switch that will drop the guy onto the tracks. Now twice as many people—a majority, about 60-something percent—say that it's okay. The only difference is between pushing with your hands and dropping him with a switch. Now, that doesn't seem like something that matters morally. It's similar to the distance in the charitable giving case.

It's good that you don't like the idea of pushing people off a footbridge. It's not something that we want to train out of people. But I think it's a mistake to use these kinds of weird hypothetical cases and allow them to have veto power over our candidates for a global moral philosophy.

Let me try to put this all together.

Two kinds of moral problems: the tragedy of the commons, the basic moral problem that we evolved to solve with our gut feelings—and there I think it makes sense to rely on our gut feelings; that's what we evolved to do—and the tragedy of common-sense morality, the modern-world problem, the problem of dealing with different groups, with different values, with different interests. What we need is a common currency. We need some kind of higher order system that we can appeal to in order to allow groups with different values to get along.

That higher order system is not going to feel good, because what feels good is what evolved to feel good for our basic moral interactions with people in everyday life. But if we want to have some kind of a system where we can make tradeoffs among groups with different ideals, we're going to need some kind of more abstract system that's sort of everybody's second moral language. It's not the language we speak most comfortably, but it's one that we can all agree on.

You might say—I get this all the time—if you're looking for evidence about what's going to promote people's happiness, of course, my side is going to say that what it wants is what promotes people's happiness, and the other side is going to say that what it wants is what promotes people's happiness. That's true, but at least there's a fact of the matter. Even if we move ten steps forward and nine steps back, we can figure out what actually promotes human happiness and what actually doesn't.

It's a messy, biased process, but at least there's a tractable goal, whereas if we just think about what rights we believe in, our rights, I think—and I'm not going to defend this point here—our conceptions of rights are really just fronts for our gut reactions. Whatever we feel in our hearts, we say this is what rights we have or this is what duties we have.

My general advice—and I realize I've given you a very short version of this; if you want to see the full defense, you'll have to look at the book—is, when it comes to the morality of everyday life, of individuals versus groups, me versus us, selfishness versus morality, think fast, mostly. But when it comes to the problems of the modern world, you're going to have to think slow, and we're going to have to find a common currency. You may or may not agree with Jeremy Bentham and John Stuart Mill and me that we should ground this in a conception of morality where what ultimately matters is global happiness, but I think that's the only way we're going to find a kind of international ethic.

Immanuel Kant famously said at the end of the Critique of Practical Reason that there are two things that fill his heart with wonder: the starry heavens above and the moral law within. I actually think the moral law within is a mixed blessing. It allows us to get along with each other, but it also creates the modern moral problems that we face, the problems that divide us. What fills me with wonder is our capacity to do both, to have these gut reactions and take them seriously, but to also step back from them and perhaps reason our way to something better.

Thanks very much.

Questions

QUESTIONER: James Starkman.

Just a couple of comments on a few of your parables. First of all, you would have put John Wayne and Randolph Scott out of business if the collectivist point of view had prevailed on the plains of the West.

JOSHUA GREENE: Absolutely, yes.

QUESTIONER: That would have been a very bad side effect.

JOSHUA GREENE: It was the Wild West.

QUESTIONER: My question is, in the modern world the essential issue globally right now is intolerance. Intolerance seems to be, not among all, but among a large group of Islamists, a key issue. In fact, a very literal reading of the Koran is that the infidel does not deserve to live, should be killed. How can we project your teaching to that whole part of the world, which is probably a third of the world?

JOSHUA GREENE: What you're essentially pointing to is a kind of tribal morality. I don't mean tribal in the sense of a small group, but tribal in the sense of rallied around proper nouns that emerged out of one cultural tradition. I don't think it's just radical Islam or the Taliban. Everywhere in the world, whether it's nationalists in Europe or evangelical Christians in the United States, or whatever it is, there are tribal moralities all over the place. I don't think there's a quick fix for this.

I think that the way forward is really self-knowledge. All humans have the capacity to ask questions like: How did we get here? And why do we behave the way that we behave?

I can come to, let's say, someone who has a traditional Islamic background and stands by those values, and I can talk to him about the trolley problem. I say, "Hey, does it seem wrong to push the guy off the footbridge?" "Yes." And then I can tell him where that reaction comes from, and he can see, "Oh, my gosh, this applies to me, too."

Whatever story I'm going to tell about how that got there, that's going to apply to them as well. You can get people to start to see these things.

I think this is a process that takes generations, but once we understand our moral instincts as imperfect biological and cultural tools for solving a real problem, then we can have a little bit of a distance on them. It's when you think that your gut reactions are delivered directly from God or from the universe that the problem is intractable.

I don't think you can just sort of make an argument and have people go, "Oh, okay, all right, that makes sense." You're not going to talk people out of gut reactions that have been trained into them since birth, essentially. But slowly, over time, we are all capable of learning about ourselves and making sense of these patterns and looking at the evidence. My hope is that we can see our gut reactions for what they are and understand that they have strengths and that they have limitations. But it's a long haul.

QUESTION: Susan Gitelson.

With all of this fascination we have with what you've been saying, how would you deal with genocide, where people kill their neighbors, with whom they have been living and presumably getting along, for better or worse? We have such cases as the Holocaust and Rwanda and Cambodia and even, now, the dilemmas with Syria. What do you do about genocide?

JOSHUA GREENE: I don't have any—obviously, it's hard to think of a bigger problem, a more acute problem. I think that the solution essentially is more of the same.

As bad and horrible as those things are, as my colleague Steven Pinker has beautifully demonstrated in his book The Better Angels of Our Nature, [Check out Steven Pinker's 2012 debate with Robert Kaplan, "Is the World Becoming More Peaceful?"] we have actually been getting better and better at dealing with these problems. Of course, you don't read headlines saying "Things Are Getting Better in the World." You read about the horrible stuff.

So the question is, as bad as things are, what have we been doing right? I think part of it is these kinds of Enlightenment values, where everybody counts and everybody deserves a decent level of treatment, and the answer is to do more to promote that.

I think one particularly acute problem here is what we in the moral cognition business call the act/omission distinction. That is, preventing these things requires the intervention of third parties. Most of us are very unwilling to cause harm ourselves directly, intentionally. But we're far more willing to not act when there's harm that we can prevent. You could see that in the example I gave about the people on the other side of the world who are suffering from the typhoon.

I think that if you want to solve big problems, you need the moral resolve of third parties. That requires not taking the act/omission distinction so seriously—that is, the distinction between harm that I'm causing and harm that is out there, caused by someone else or something else, that I could prevent. In general, we don't feel like we have to do something about the latter, at least not as strongly.

You would never cause an earthquake. If I gave you $10,000 but you had to cause an earthquake that kills 10,000 people, would you do it? Of course you wouldn't. But if I said, "Would you give $10,000 to save some of these people from the earthquake?" a lot of people would say no, or say yes and then forget about it.

I think the number one thing we can do is to take more seriously the need to intervene to solve problems that we ourselves didn't play a role in causing.

QUESTION: John Richardson.

In the world of Dante's Inferno, I believe the ninth circle, the lowest circle, is inhabited by one sort or other of the deceivers. I'm curious. When you have this world where people have to negotiate and agree on a code of conduct, how do you deal with the deceivers?

JOSHUA GREENE: I don't know if I have a terribly original answer to this. I think just greater transparency. Then you say, well, that's easy, of course. But where does transparency come from? I think enforcing transparency comes from having strong global institutions. Take the case of weapons of mass destruction. What keeps weapons of mass destruction in check is having these global institutions, where people have not just the desire, but the power to really go in there and enforce it.

People will deceive as much as they can get away with in order to favor their "us" over someone else's "them," and again, it's really the commitment on the part of third parties to have strong global institutions that make it harder to deceive. I think that's the only long-term solution.

QUESTION: I'm Catherine Stevenson.

I just have sort of a corollary question from one of the first questions. Do you think that maybe one of the psychological factors in the sort of us-versus-them problem might have to do with having an affinity for a particular group, for the sake of being in a particular group, like having a banner as opposed to any kind of gut reaction or deliberative process?

JOSHUA GREENE: I think that forming an "us" is a basic human need. What's the worst thing that we do to people when we want to punish them? We put them in solitary confinement. And that's considered the worst thing, even among murderers. Humans are thoroughly social beings. The need to be part of an "us" and to find a kind of higher meaning in one's group membership I think is completely integral to our nature.

QUESTION: I'm Lauren Blum.

I'm intrigued by the idea of the Enlightenment values and the role of the individual. But in many groups there's no sense of the individual, so you don't have a me-versus-us. There's no "me," whether it's an ideology or, I would say, in other cultures. So how do you work with that, when there's no "me"?

JOSHUA GREENE: Some cultures are more individualist and some cultures are more collectivist. Perhaps you meant this more metaphorically. I don't think it's true that there are any cultures in which people care about others in the group as much as they care about themselves. If there were, there would be no need to have rules. There would be no need to have institutions. There would be no need to have internal carrots and sticks to get people to behave more nicely.

But maybe the question is, if you have a group that is very much collectivist—I want to make sure I understand the question—how can that group get along with other groups that have different values?

QUESTIONER: How do you create that common currency when it comes from the role of the individual, which is what Bentham and Mill were looking at?

JOSHUA GREENE: I think the kind of common currency that Bentham and Mill favored really cuts across this problem. They faced the question: Is slavery wrong? They couldn't appeal to gut reactions about this, because among the people they were trying to convince, most people's gut reaction was that it was okay. So they said, "What we think is really behind all the values that we all endorse is this idea that what really matters is our subjective well-being, what really matters is our suffering and our happiness."

In principle, this can be measured. It's hard. It's messy. But in principle, there's a fact of the matter about just how happy or unhappy each person is. What they are saying is that we should be doing whatever is going to produce the greatest amount of happiness and minimize suffering.

People may agree with that or disagree with that, but that can be understood as an evaluative standard, independent of whatever your tribe says, independent of how collectivist or individualist your tribe is. I think the genius of their philosophy is not that it intuitively appeals to everybody, but that it makes a kind of at least abstract sense to everybody, and that's a start.

I'm not sure if that's getting at the question that you have.

QUESTIONER: I think it does. I'm just concerned that, with an ideology, there is no sense of the individual at all. That's what I'm trying to ask you, really. With a group where there really isn't a sense of the individual, how do you—it's not a group of shepherds that pool their goods together.

JOSHUA GREENE: The modern problem, as I have stylized it, is different "usses." It's "usses" versus "thems." You could have an "us" that is a somewhat collective, somewhat individualist "us," where there's a lot of individual prerogative, but there's still a sense that you have to do things for the group and there's still a sense that our group is more important to us than other groups. And you can have one, as you described, where the individuals care very little about themselves and it's all for the hive.

So now we have a tension between this very hive-ish "us" and this more moderate "us." As long as they both favor their respective groups, in a sense it doesn't matter how the internal workings are, whether the individuals give up their sense of self for the group or whether there's some kind of balance. It's really about the clash between the groups.

I think that even though it's interesting to think about how this plays out in different kinds of groups that are more versus less collectivist or extremely collectivist, where the self is completely given up to the group, the big modern problem is the problem of, once you've got the groups thinking "us, us, us," how do you solve that problem?

QUESTION: Morgan Block.

Earlier you were saying that it's great that all humans have the capacity to ask questions. You also mentioned that there sometimes becomes a problem when people have a higher power or a god that they believe in. How do you get people out of this vicious cycle where they're asking questions, but they are constantly returning to the moral code set forth by their religion or some other belief?

JOSHUA GREENE: I think, again, self-knowledge and knowledge more generally is the key. And it's not a quick fix. Some people, let's say in the United States, are raised as evangelical Christians who believe that the world was created in seven days. Then you say, "Well, look, you went to the museum. You saw that dinosaur. How do you reconcile that? That was pulled out of the ground. You learned about chemistry, right? You learned about radiocarbon dating. Are you saying that God put these dinosaur bones in there with just the right ratio of carbon 12 to carbon 14 to make it look like it's 67 million years old?"

You would say, "Look, here's a skeleton of a primordial whale with a little bit of a pelvis. Where did that come from? You can look at the bone. You can see it in the museum."

Members of all tribes are smart. Members of all tribes can look at this and say, "How do I put those two things together?" You either have to come to the conclusion that, okay, this is somehow a fraud, this is a fake whale pelvis that someone's waving in front of me, or you have to reconcile those things.

With enough inundation of facts about our minds and facts about the world, you can come to question that tribal reality. It's hard and it's slow, and some people will stick to it no matter what. But—as I said, ten steps forward, nine steps back—it's possible for a scientific understanding of ourselves and the world to move us in, I think, a more productive direction.

QUESTION: Sharareh Noorbaloochi.

I have a question about your assumption about the brain just having a dual processing system. There are many more studies coming out showing that there's a more dynamic system between emotion and reason. It's not so distinct. Your camera metaphor wouldn't necessarily fit that. There's no switch. Actually reason can affect your initial gut reactions or emotional intuitions. How would you reconcile that, if that happens to be the case? We know the brain is very bidirectional. So end result: You can retrain your gut reaction. How would you reconcile that?

JOSHUA GREENE: I gave a quick talk here to a mostly nonscientific audience. I did not mean to imply that there is no back-and-forth between manual mode and automatic settings, that there are just these two separate systems like on the camera, where you can have one and the other, and use them completely separately. That's a misleading aspect of the analogy. I apologize for that.

My former postdoc and now professor at Brown, Fiery Cushman, has been doing some really interesting work trying to explain the origins of dual-process thinking. Basically—this is a very long story that I'll try to make short—there's this idea that comes originally from artificial intelligence, from machine learning, model-based versus model-free learning processes, where model-free basically means you don't have a map of the world that you carry around in your head. Instead, you learn to associate good or bad with different kinds of specific actions.

Why does pushing the guy off the footbridge feel bad? When you were a kid, you did something like that—you shoved a kid on the playground—and you were admonished for it or you saw somebody else get admonished for it. That gave you a feeling that the act itself is wrong. Now I put you in the situation where you can push somebody to save five people's lives, and it still feels wrong, because it inheres in the action.
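
[Editor's note: A toy sketch of the model-free idea Greene describes, in Python. The learning rate and outcomes are made up; the point is that value gets cached on the action itself, so it carries over even when the context changes.]

action_value = {"push someone": 0.0}
learning_rate = 0.5

# childhood episodes: shoving someone leads to a bad outcome (being admonished)
for outcome in [-1.0, -1.0, -1.0]:
    prediction_error = outcome - action_value["push someone"]
    action_value["push someone"] += learning_rate * prediction_error

# years later, in the footbridge case, the same cached value fires,
# even though pushing would now save five lives
print(action_value["push someone"])  # negative: the act itself feels wrong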

Your model-based system—where, for example, I've given you information about what the causes and effects are going to be, and you reason over that map of the situation—that's your other way of thinking.

These things can interact. For example, I can describe to you and you can explicitly think about, this person got punished for this or this bad thing happened, and that can inform your feeling about the action. In the developmental process, these things interact, and they can interact in the moment.

I think this is something you don't want to lose sight of. I get a lot of neuroscientists who say, "Oh, it's not that simple." Of course. It's never that simple. But nevertheless, put people under time pressure, and systematically their judgments go one way. Put people under conditions where they think more reflectively, and their judgments systematically go the other way.

What that means is that this is right, or something like this is right, as a first approximation. If you want to divide the brain into sort of two continents, I would say there's one giant continent, the manual mode, and then a lot of smaller continents, all the different automatic reactions. That's right as a first pass. Now, does that mean that there isn't a little isthmus sometimes connecting this little island to this continent or something like that? It's complicated.

But I think the behavioral results indicate that this is a good starting point. But I agree, it's not quite that simple.

JOANNE MYERS: We, too, are under time pressure. But as you said, it's a good starting point. I thank you very much for sharing your ideas with us.

JOSHUA GREENE: Thanks for having me.
