The Predictioneer's Game: Using the Logic of Brazen Self-Interest to See and Shape the Future

Oct 6, 2009

Iran, Iraq, Israel, and North Korea--all are rational players, acting in their own self-interest as they perceive it, and with game theory we can predict what they and other players will do next.

Introduction JOANNE MYERS: Good morning. I'm Joanne Myers, Director of Public Affairs Programs, and on behalf of the Carnegie Council I'd like to welcome you to our breakfast program.

Long ago, in 16th-century France, there was a man who would later become known as the most famous seer in history. His name was Nostradamus. By many accounts, our speaker this morning, Bruce Bueno de Mesquita, is his 21st-century counterpart. However, while Nostradamus looked to the stars and mysticism to divine his apocalyptic revelations, Professor Bueno de Mesquita relies on a more modern source to predict future events: the computer.

This modern-day prophet is not only a renowned political scientist, a master of game theory, but also a distinguished academic who teaches at New York University and Stanford. He has written many widely acclaimed articles and books.

For his presentation today, he will open the door to a new world of decision making, which he has written about in his book, The Predictioneer's Game: Using the Logic of Brazen Self-Interest to See and Shape the Future. In this publication he reveals how having advance knowledge can be awfully useful in many situations, whether in national security, foreign policy, business, or our day-to-day lives.

Professor Bueno de Mesquita explains a controversial system of calculation that he has created to forecast the future and demonstrates the power of using game theory and related assumptions of rational and self-seeking behavior for accurate prediction. He argues that decision makers can predict the outcome of virtually any international conflict, provided the basic input is accurate.

The computer model our guest built and has perfected over the last 30 years involves a cold, hard system of calculation that allows individuals to think strategically about what their opponents want, how much they want it, and how they might react to every move. Analyzing other people's incentives means also analyzing how altering their costs or benefits can be used to change their behavior, and even engineer the future, to produce more agreeable outcomes.

Among his many successes I would like to point out just a few. He was correct in predicting who would succeed Brezhnev, the exact implementation of the 1998 Good Friday Agreement between Britain and the IRA, the Tiananmen Square massacre, and the Persian Gulf War—all well before the fact.

Now, some of you may be skeptical, but be forewarned. Professor Bueno de Mesquita claims a 90 percent accuracy rate in his use of game theory to predict political trends, and his fans include many Fortune 500 companies, the CIA, and the Department of Defense.

If you're still as curious as I am as to how this is accomplished, then I ask you to please join me in giving a very warm welcome to our guest today, Bruce Bueno de Mesquita.

Thank you for coming.

BRUCE BUENO DE MESQUITA: It's a pleasure to be here.

I do hope that by the end of the morning you will realize that predicting and engineering the future is not all that mysterious. It's not actually hard to do. It is a source of constant amazement to me that people are surprised that it can be done.

What I'm going to try to do this morning is explain why it can be done, more or less how it's done (but I won't go into equations unless I get asked—I like to do equations), and talk a little bit about the track record and some events as I see them coming up in the future.

Let me start with that Nostradamus guy, though. The History Channel a year ago did a documentary on me, called "The Next Nostradamus"—God, I hope not!—although there were many important similarities. He was a professor of an actual, serious subject, of medicine, and probably quite accomplished; apparently, he did pretty well at treating people who had the plague. But the prediction stuff— he stared at a bowl of water for very, very long periods of time, and somehow that produced quatrains that made absolutely no sense. You read them now and they still make no sense. Amazingly, people after the fact said, "Oh, he predicted . . ."—World War II, the modern end of the world, or what have you.

I am trying to engage in science. So I don't stare at a bowl of water. I stare at something much more serious, a computer screen, and let it flicker before my eyes, and hope to get insight.

Let me start off by saying that if you want to experiment with doing some predictioneering yourself, I've created a website for the book—I apologize for the plug. I mention this because one of the pages on the site allows you to download a stripped-down version of the model. It does all of the calculations, but it only allows you to predict. There is very little capacity to engineer the world. I thought that was a little risky with apprentices. I did watch Fantasia. I do remember Mickey Mouse as the Sorcerer's Apprentice. You don't want to give the book of spells to just anybody. But you can build your own data sets and you can experiment and see—you could predict how family conflicts will come out, or what have you. If you're clever about it, you'll be able to see a little bit of engineering. The full model is much more complex.

So what is it that makes it possible to predict people's behavior? I start from a few very simple, very basic, sometimes controversial, assumptions. I assume that people have interests—I don't think anybody would dispute that people have interests—and that they try to do what is good for themselves, they try to do what they believe will make them better off.

There aren't a lot of people who go around trying to harm themselves. I'll come back to that in a moment, because you're thinking: "Aha! But what about suicide bombers?" They're rational. I'll explain why they're rational. Just give me a moment.

So I assume people are rational; that is, they are self-interested. Rational does not mean that they have some deep insight into the world. It does not mean that they calculate every possible thing that could happen to them—that, by the way, is not rational, because making those calculations is costly. So you don't try to solve something beyond the point where you think the marginal cost is greater than the marginal gain. So it's not rational to try to work out every possibility that could happen.

But you do try to work out what the constraints are, what's in your way from getting what you want. You may have wrong beliefs about that. But basically everybody has values and they have beliefs. They act on their values and their beliefs and constraints.

So what are the constraints?

Well, there are constraints in nature, by which I mean, for example, you might be interested in owning the city of New York. Your pockets are probably not deep enough to own the city of New York, so you're constrained. It's not a real choice. You can't do it.

The more interesting constraints are that there are other people, and those other people have values and interests. Often, their values and interests are different from yours. They want to stop you from doing what you want to do, and you want to manipulate them into getting out of your way.

So if you are President Obama, you're thinking, "I don't want Iran to have a nuclear program. I certainly don't want them to have a nuclear weapon." Well, that's great. And he's a very powerful person, he's the leader of the most powerful country in the world. But it's not up to him. He doesn't get to decide whether or not the Iranian leadership will pursue the development of a bomb. He gets to try to persuade them not to be in the way of what he wants. They of course, meanwhile, are trying to persuade him or coerce him into getting out of their way.

That is the fundamental problem of all social interaction. And because people are pursuing their self-interest in social interactions, and because they are constrained by the interests of other people, they become predictable, because it doesn't make sense to do what is against your interest, to try to trick the computer so to speak, because you're harming yourself. So if we can work out what is it that you are interested in achieving, what is it that you believe about other people and the situation that you're in (the context), and what are the constraints that you face—so what do you want, what are the constraints, what do you believe about others—if we can work that out, then we can predict your behavior, taking into account that you are taking into account what other people will do.

This is the standard problem for anybody who plays chess or plays bridge or plays poker.

Let's take poker as an example. A decision theorist—I try to be an applied game theorist, not a decision theorist—playing five-card draw looks at a set of cards and says, "Well, if I put down two cards, what are the odds that I will pick up two aces or I'll fill out a flush, or what have you?" That's a straightforward probability calculation. It's not a very hard probability calculation. But it's also not a very good way to play poker, because poker, like most interesting problems in life, is not about how good the cards are that you hold; it is about how well you persuade other people that your cards are good when they are not.
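The draw-odds arithmetic the decision theorist performs can be sketched directly. The two cases below are the standard five-card-draw calculations, included here as an illustration; the specific hands are assumptions, not examples from the talk:

```python
from math import comb

# Case 1: you hold a pair of aces and discard your other three cards.
# What are the odds that the two cards you draw are both aces?
# 47 cards are unseen, and 2 aces remain among them.
p_two_aces = comb(2, 2) / comb(47, 2)  # one way to draw both aces

# Case 2: you hold four hearts and draw one card to fill the flush.
# 9 hearts remain among the 47 unseen cards.
p_fill_flush = 9 / 47

print(f"P(draw two aces)  = {p_two_aces:.4f}")
print(f"P(fill the flush) = {p_fill_flush:.4f}")
```

As the speaker notes, this is not a hard calculation, which is precisely why it is not enough: it says nothing about what the other players believe your hand to be.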

So if you're Kim Jong Il, you have been dealt a really lousy hand. You run a diddly-squat little country with very few resources, a per capita income that is minuscule, and yet you have played your cards really well. You have put yourself on the world stage. You have gotten people to be worried about what you are going to do. I bet he's a very good poker player.

I, by the way, am not a good poker player. We study what we don't understand. It's true. It's true. It's what we do. We study what we don't understand.

So when you are dealt cards in poker or when you are dealt a hand in bridge, you understand that a big part of the game is the wagering, it's making bids, putting money on the table, and so forth, because that's the way to signal the other side about how good your hand may be. Now, they know that you may be lying. Oh, I'm not supposed to use a word like that—bluffing. "Bluffing" is the game theorist's word for lying. They know that you may be pretending to have better cards than you do. But they also know that you know that. So you may be clever enough that sometimes you bluff and sometimes you don't. So if they always challenge what you claim, putting a lot of money on the table, you're going to learn to adjust and really bet a lot when you have a good hand, and they're going to lose their shirts.

This is a complicated problem. This is the sort of problem that game theorists try to solve. It's the kind of problem I try to solve. But I don't play poker. The last time I played poker was about ten years ago. I lost every penny that was in my wallet. It was at a friend's house. They had a very good time with "the game theorist doesn't know what he's doing"—which was true.

I'm trying to solve problems out there in the world. One of the things that I realized a long time ago, when I started to do this in the late 1970s, is that trying to solve a business problem, like a lawsuit or a merger or an acquisition or dealing with a regulator and trying to change a regulatory environment, or dealing with a foreign policy problem, like trying to overthrow an unfriendly government or protect a friendly government or persuade a government to adopt policies that we would like them to follow instead of the policies that they want to follow—these are all the same problem.

Iran's nuclear ambition and how to deal with it and the problem of settling a lawsuit are exactly the same problem. They are self-interested people with different interests competing with each other where they could negotiate a resolution to their problems and they can do so in the shadow of the threat of coercion. All of the problems that I try to model, all of the problems that I try to predict, are problems involving the possibility of negotiation in the shadow of the threat of coercion.

Sometimes people ask me to predict the stock market. Except for hedge fund kinds of settings, where there are regulatory or political decisions that will affect how a segment of an economy performs, the stock market is something I don't have a clue how to predict, because the stock market is an implicit negotiation between buyers and sellers, but there is no threat of coercion. So it's not a problem that I can address.

I get phone calls occasionally—unfortunately, it's very easy to find people's phone numbers these days. I got a call from a fellow in Tennessee. He chatted with me for a while, was very sweet, and then he whispered, "Just tell me what numbers will win the lottery." I'm not good at random number generation. I don't predict lotteries.

What I do predict is political problems, business problems, those sorts of things, and I do so based on values and beliefs and constraints.

What is it that game theory brings to the table that the way most people's approach to policy problems does not bring to the table? Game theory explicitly deals with how a calculating, self-interested person addresses uncertainty, how people address risks. Risks are different from uncertainty. Risk is the probability that something will happen; uncertainty is not being sure about what the probability is that something will happen. That's not the only form of uncertainty, but it's an important form.
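The risk-versus-uncertainty distinction can be made concrete with a small sketch. The payoffs and probabilities below are invented purely for illustration:

```python
payoff_win, payoff_lose = 100.0, -50.0

def ev(p):
    # Expected value of the gamble, given a win probability p.
    return p * payoff_win + (1 - p) * payoff_lose

# Risk: the probability of winning is a known number, say 0.3.
ev_risk = ev(0.3)

# Uncertainty: we are not sure what the probability is; we only hold
# beliefs over several candidate values. The average expected value can
# match the risky case, yet the possible outcomes spread widely, which
# changes how a calculating player should behave.
candidates = [0.1, 0.3, 0.5]
mean_ev = sum(ev(p) for p in candidates) / len(candidates)
spread = (min(ev(p) for p in candidates), max(ev(p) for p in candidates))

print(ev_risk, mean_ev, spread)
```

Under risk, the player faces one expected value; under uncertainty, the same mean hides a range of possibilities, and part of strategic play is acting to learn which candidate probability is the true one.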

Game theory deals with expectations. And game theory, most importantly, deals with strategic play—that what I choose to do is not driven just by what I want, but is also driven by what I think your response will be if I do this or I do that. I might prefer to do this, but the outcome from doing this may be not as good for me as doing that. So I do that.

People make a mistake. They look at things and they see things turn out badly and they think, "Obviously these people weren't rational. Could Hitler have been rational? How could he be rational? He lost the Second World War."

Well, yes, after the fact we know how it turned out. Before the fact—you know, people make decisions before the fact. He didn't know how it was going to turn out. He actually had a very good run. He had a pretty good shot at winning the war.

So who is rational and who is not rational?

Suicide bombers—I alluded to them earlier—are rational. I'm going to be obnoxious about suicide bombers. I'm going to point out some simple facts and then I'm going to relate them to other facts in history.

Saddam Hussein used to pay the families of suicide bombers $10,000. For a while there was a pretty good supply. These were Palestinian suicide bombers. There was a pretty good supply of Palestinian suicide bombers. Then the supply began to taper off. There weren't enough people willing to blow themselves up. He raised the price to $25,000 and there was an outpouring of people ready to blow themselves up. They were responsive to incentives. $25,000 for a Palestinian, given their circumstances, their very meager incomes, was a lot of money.

Now you think, "Oh, that may be coincidence, that's obnoxious." I point out in The Predictioneer's Game that the United States identified a group of people that we, in typical Department of Defense fashion, called CLCs. They love acronyms, especially three letters. If it's three letters it's really good. The National Geospatial-Intelligence Agency, the people who do all the photographic satellite stuff, is called NGA, not NGIA. That fourth letter means you're not important.

So we had CLCs. What are CLCs? Concerned local citizens. There are about 60,000 of them. What were these concerned local citizens? Were these the folks that you call when you're going on vacation, "Could you come over please, feed the cat?" "I'm going to be late from work, my kids are coming home from school; could you let them in and give them a glass of milk and maybe a little snack and I'll get home?" Well, maybe they did those things—I don't know.

But that's not who they were. They were all anti-American insurgents. Some of them were members of al Qaeda. And 60,000-70,000 of them became defenders of American interests against Muqtada al-Sadr and so forth, against the people who were problems for us in Iraq. Why did they do that? Did they have a sudden realization, "Oh, we're the good guys?" I don't think so.

But we were paying them. Anybody know how much we paid concerned local citizens, how little it takes to buy an insurgent and make them a loyal defender of American interests in Iraq? Ten dollars a day— 60,000-70,000 of them.

Well, as it turns out, the per capita income in Iraq at the time was $6.00 a day. So it's not very hard to hire somebody if you go up to them and say, "You know what, I'm going to increase your salary by 67 percent, practically doubling your salary, and all you have to do is stop shooting at me, and maybe occasionally you'll want to bludgeon somebody who is thinking about shooting at me. I would appreciate that." Ten bucks a day, they swung over, just like for $25,000 the suicide bombers swung over.

You think this is unusual, maybe this is a Middle Eastern thing? During the Second World War there were kamikaze pilots. The mythology of kamikaze pilots is how eager they were to die for the emperor— which they may have been; I'm sure that many of them were. But as it turns out, when the Emperor announced that their family's debt would be forgiven, there were a lot more volunteers to be kamikaze pilots than before his announcement that family debt would be forgiven.

And if we go back to the period of the Crusades in the High Middle Ages, when Innocent III, the pope who rose to power in 1198 - a very clever guy, but that's another story - when he called for a Holy Crusade, hardly anybody volunteered. So then he said, "Well, let me tell you if you come and fight for the church and you die fighting for the church, you are guaranteed heaven." Hardly anybody showed up. It's nice, heaven, a very pleasant place I'm sure, but it didn't get that many people.
Then he said, "If you fight and you die in the Crusade, I guarantee you heaven and your family's debt will be forgiven." A huge outpouring of volunteers.
People respond to incentives. So suicide bombers are rational.

How about the other end of the spectrum, the sacred type, the very, very, very, very good person, the seeming altruist, sacrifices their lives for the benefit of others? I think that is, by the way, the way suicide bombers see themselves, sacrificing their lives for the good of others.

Mother Teresa—could Mother Teresa have been rational? Could a person who did all the wonderful things that she did for other people be rational? You bet, and so therefore predictable.

We might notice a few things about Mother Teresa. A lot of nuns, I imagine, are good deed doers. It's good to be a good deed doer. I believe in good deeds. But most nuns live out their lives in obscurity and anonymity. They don't, for example, design their own uniform so that when they walk around in the street you say, "Oh, that's not just a nice old lady, that's Mother Teresa"—white sari, blue trim, leather sandals. No anonymity there. And of course, if they live their lives out in anonymity, they don't win Nobel Prizes.

Now, Mother Teresa had a problem—and then I'll leave her alone, because I get in trouble when I talk about Mother Teresa. She had an internal problem and an external problem.

The internal problem was the problem of Saint Bernard of Clairvaux, one of the founders of the Cistercian Order. Bernard had the following problem: He left the Benedictine Order in order to follow the laws of God more closely than the Benedictines did. And then he had a great insight: "I'm following the laws of God more closely than anybody else. I must be guilty of the mortal sin of hubris. I think I'm better than other people. So if I don't follow the laws of God, I go to hell; but if I do follow the laws of God, I feel I'm a better person, so I go to hell for that." It's a tough spot to be in. Mother Teresa had this problem. Read her book.

She had another problem, the external problem. Maimonides, approximately a contemporary of Bernard of Clairvaux, defined what it meant to do charity at the optimal level. Charity at the optimal level involves three conditions: You give anonymously, the recipient does not know who the giver was (Mother Teresa was not anonymous); you don't know to whom you are giving (she knew to whom she was giving); and you make the recipient self-sufficient (she didn't). Enough of her. But she was rational. She was advancing her self-interest.

So this model assumes that people are rational; that they have two dimensions of interest—one is to influence policy outcomes, to affect what happens on the question of interest, nuclear weapons or what have you. But there is another dimension—this is the dimension that is so important for somebody like Mother Teresa— credit, being seen as important in how an issue is resolved, being seen as somebody who put together a solution.

There is a tradeoff between putting together a solution and getting what you want. If you stick to your guns, being fully resolved—"I believe in this position"—and you go down in a blaze of glory, you don't get much credit for success. But you attach a lot of value to the position you took.

You may also be the leader whose finger is up in the wind—"What is popular? I could do that, and then I'll be seen as somebody who helped put an agreement together and I get credit."

The model measures the extent to which people trade off between these two dimensions. It does that through a fairly subtle mechanism. Let me move along quickly to what that looks like and then give you some forecasts.

What do you need to make a prediction? You need to know who's going to try to influence the decision; not just who the decision makers are, but who will try to influence them—their advisors, lobbyists, protestors. For each of those people you need to know three or four pieces of information, depending on which of my models you are using. In my new model, the model that's on the Web, there are four pieces. You need them as numbers.

So you've defined an issue, something on which a decision has to be made.

What do they currently say they want—not what in their heart of hearts they want, that's very hard to know; not what do they think they will get—that's what we're trying to analyze—but what do they say they want. That's a strategically chosen position from which we can work backwards to how much are they looking for credit, not getting out on a limb, and how much are they looking to advance their particular agenda. What do they say they want?

How focused are they on the problem? How willing are they to drop what they're doing and attend to the issue when it comes up?

So imagine you're listening to me talk, you're still awake—that's good. Your cell phone vibrates. You take a little peek. President Obama is calling you. Are you going to sit here and listen to me or are you going to leave the room and take the president's call? Of course you're going to leave the room. I know what the relative salience is. I would too.

So we want to know how willing are you to put your effort into the issue when it comes up. You could be a very powerful figure but not care about an issue, in which case you don't actually exert very much influence.

So how salient is the issue? How much influence could you exert and how resolved are you? If we know how resolved you are, where you stand, how salient it is, and how much clout you could exert, we can work out your interaction with everybody else and how the game will play out.
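The four inputs the speaker lists—stated position, salience, clout, and resolve—can be pictured with a toy calculation. The data structure, the numbers, and the power-weighted average below are illustrative assumptions drawn from simplified published descriptions of models of this kind, not the speaker's actual model, which he notes is far more complex:

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    position: float  # what they say they want, on a 0-100 issue scale
    salience: float  # 0-1: how willing they are to attend to this issue
    clout: float     # 0-1: how much influence they could exert
    resolve: float   # 0-1: outcome-seeking vs. credit-seeking

# Hypothetical players with made-up numbers.
players = [
    Player("A", position=80, salience=0.9, clout=0.8, resolve=0.7),
    Player("B", position=40, salience=0.5, clout=0.9, resolve=0.4),
    Player("C", position=20, salience=0.8, clout=0.3, resolve=0.9),
]

def weight(p: Player) -> float:
    # Effective power: clout discounted by how much the player cares.
    return p.clout * p.salience

# A common first cut in the open literature: forecast the outcome as the
# power-weighted mean of stated positions. (A full game-theoretic model
# plays out rounds of proposals and coercion and uses resolve to govern
# the outcome/credit tradeoff; this toy average ignores all of that.)
forecast = (sum(weight(p) * p.position for p in players)
            / sum(weight(p) for p in players))
print(f"power-weighted forecast: {forecast:.1f}")
```

The point of the sketch is only that once positions, salience, and clout are expressed as numbers, a forecast of where the bargaining will settle becomes a computation rather than a guess.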

So let's make some predictions. I have one last quick thing before I make predictions.

In the introduction it was said that I claim a 90 percent track record. I am very sensitive to that phrase. It is not accurate. I do not claim anything.

The Central Intelligence Agency in a declassified study of more than 1,200 applications says that the accuracy rate is 90 percent. That's the CIA that says that.

Academics who have reviewed the work also say—because I've published hundreds of forecasts before the events occurred in refereed, peer-reviewed journals—they too say that those predictions are accurate about 90 percent of the time.

I am not predicting that the sun rises in the east and sets in the west, although I don't disagree with that. But I've tried to predict hard things.

I want to talk very quickly about two sets of predictions: global warming, which occupies a significant part of the last chapter of the book, and the Copenhagen Summit.

Copenhagen will produce a feel-good agreement. The media will love it. It will mean nothing—nothing. It will have nothing to do with global warming.

President Obama in his UN speech said that in order to control global warming we have to have a universal agreement, everybody has to be on board. That's a feel-good statement that is an excuse for doing nothing. We don't have to have everybody on board. We could unilaterally raise taxes on gasoline, fertilizer, things that produce greenhouse gas emissions. We're reluctant to do that. Why? Because politicians are not about the national interest; they're about getting reelected. Now, to the extent that advancing the national interest gets you reelected, then you're in favor of the national interest. To the extent that it doesn't, you don't worry about it. Kim Jong Il doesn't seem to be worried about the national interest in North Korea.

So what do universal agreements do? They do one of two things. They either are agreements that ask people not to change their behavior, in which case they get tremendous compliance—they're decorated with a lot of nice words, but they're not asking for change.

The Kyoto Protocol had 175 signatories. One hundred and thirty-seven of the 175 were asked to do nothing: Kyoto stipulated no change in their emissions.

So either you're asked to do nothing, in which case you comply; or you are asked to make great changes—Japan was asked to greatly reduce its emissions; Britain was, and so forth—in which case the agreement—read the text—will have no teeth. There will be no mechanism to monitor compliance or punish violations.

So Japan, the host of Kyoto, a few weeks later announced, "It's not possible for us to meet the targets we signed on to meet." The British a few weeks later, "Not possible to meet the targets."

Copenhagen will produce the same. When you go for universality, what you go for is, "I'm not asking you to do anything" or, "If I'm asking you to do anything, I'm going to close my eyes, I'm not going to be able to enforce it, I have no mechanism to punish you." That's what will come out of Copenhagen.

What is more likely to solve the problem—or at least address the problem—of global warming is that the cost that global warming imposes will lead to the development of superior technologies at affordable prices and technology will solve the global warming problem.

Iran's nuclear program: Back in February I gave a talk at the TED Conference and I made three predictions about Iran.

Iran will not build a nuclear weapon. They will develop enough weapons-grade fuel to show the world that they know how to make a weapon, but they won't make a weapon.

Khamenei and Ahmadinejad and that clique are on their way down in political power. Jafari, the Revolutionary Guard, and the Qum clerics and the students are on their way up.

I am going to sum up with what I see as having happened since then and then I'll take questions.

On September 9th of this year, just a few weeks ago, front-page New York Times, lead international relations story, first paragraph: "The president has been informed by the intelligence community that Iran is making a sprint forward towards the ability to develop a nuclear weapon, but they have deliberately stopped short of making a bomb." That's exactly the prediction. They are demonstrating to the world, "We have the know-how, but we don't intend to build the bomb."

Now, the risk that they will build a bomb goes up if we don't come to terms with them in the next year. In 2011 it goes up, although still the dominant outcome is no bomb.

We need to be focused on negotiating an inspection regime that has teeth to ensure that they don't go past the demonstration, getting credit in the Middle East, and so forth—"See, we have the technology"—but that they don't go beyond that. There has not been enough discussion about what that inspection regime would look like.

It is obvious, in the aftermath of the June election, that Khamenei's political clout and Ahmadinejad's have declined. Ahmadinejad couldn't even appoint the cabinet he wanted.

Khamenei had never faced any formal, open opposition in the years since he came to power—it's 30 years since the revolution, 20 years since he came to power. (Khomeini died in 1989.) But he did now—not just from students. Remember, very prominent politicians—Rafsanjani, Khatami, Mousavi—opposed his politics. The Qum clerics, the people designated by Khamenei to succeed him, came out against Khamenei. And Khamenei himself, when the trials started against the students who were demonstrating, said, "I have seen no evidence that they were the agents of foreign powers," undermining the court's case—his own court's—against them, trying to shore up his political position. He is in decline.

In the last couple of weeks the news media have been reporting the rise of Jafari and the Revolutionary Guard into what looks like a dictatorial position with the backing of the bunyads, the moneyed interests, exactly what was predicted back in February publicly.

So I believe what we will see in Iran over the next two years is an erosion of the theocratic thug regime and we will see instead a switch to a more pragmatic military dictatorship, pragmatic in its policy - they want to get the economy going—with the theocrats trying to stay out of politics, and with Khamenei in the next couple of years, probably in two years, stepping down, saying he wants to retire, and being replaced by much more moderate theocratic interests.

I base this not on personal opinion. I have no personal opinions about Iran. I'm not knowledgeable enough. I base this on the science of predictioneering, using game theory, the assumptions that people are doing what they believe is in their self-interest, to sort out how they will behave and to anticipate the consequences and then to design ways to change their behavior.

I'll take any questions that you have. Thank you.

Questions and Answers QUESTION: Getting to the game theory that you mentioned a couple of times, there are some problems with that in policymaking, whether in the intelligence field or in foreign policy or military. That is the problem of mirror imaging. Do you want to discuss that a little bit?

Sure. I don't actually believe this is a problem. Why not?

First of all, what is essential in good analysis is not only to think about what we want, but of course to put ourselves in the shoes of the other parties, many other parties, and work out how they are looking at the problem. Neither they nor we can fundamentally alter what we want in order to trick the other side.

It is true that if we could get everybody to follow a given pattern of behavior, we could resolve problems more quickly, though not necessarily optimally from one side's point of view or the other's. The approach is more manipulable when only one side is using it.

But the basic reasoning process is what we are hard-wired to do. So when we say there's a problem, I go back to that independently audited track record and I observe that one of the things the CIA reports is that when they looked at the expected results produced by their analysts, who were the sole source of data for my model, what they report is that I "hit the bull's eye"—that is, my models do—twice as often as the experts. They report that when the model and the experts disagreed on the outcome, which was about half the time, the model was right, the experts were wrong.

So the question then becomes: What would be a better way of analyzing foreign policy problems?

I am currently on a committee of the National Academy of Sciences that has been tasked by the Director of National Intelligence to try to bring more rigorous and transparent methods into the intelligence process. These are some of the transparent methods that I think need to be brought in.

There are limitations, but the limitations are in that 10 percent, not the 90 percent. That's the little part, not the big part, where the problems arise.

QUESTION: How do you use your theory to decide whether Israel will bomb Iran?

BRUCE BUENO DE MESQUITA: Excellent question. I've done quite a lot of work on the Israel-Palestine relationship and the Israel-Iran relationship, or lack of relationship.

Let me step outside of the model for a moment and make a technical observation. Israel does not, as far as I know, have a missile capable of delivering an explosive device to the locations of the nuclear facilities in Iran. So to do so they have to use bombers. Their bombers, not by accident, don't have the range. They have to refuel in the air. In order for them to refuel in the air to carry out such a mission, the United States has to provide the refueling capability. And of course they would be flying slowly over the airspace of unfriendly places that will want to shoot them down, though that's less of a problem. The point is that we would have to be providing the refueling capability.

I think it is extremely unlikely that the Obama Administration will do that. Therefore, I think it is unlikely that the Israelis will take an actual step towards taking out the facilities.

But of course it is in their interest to threaten it. It is in their interest to have the Iranians have to have that as something that they worry about. And so that's a good thing. Threat is good.

QUESTION: I'm interested in your thoughts on the tripartite solution that came out of the Dayton Peace Accords and was applied to the Bosnian situation. It seems now to be applied to Iraq. Do you see Bosnia going back to war? Do you see an Iraqi division into three states or autonomous regions succeeding similarly or failing?

BRUCE BUENO DE MESQUITA: I have another game theory model that is about institutional structure, done with some colleagues in a book called The Logic of Political Survival. It was a very technical body of work that speaks to this.

A three-part, three-state Iraqi solution has some real attractions from an American perspective. From a Sunni perspective, though, where's the revenue? They don't have the oil. So in order for them to have a viable state that could withstand threats from a Shia regime, they need either some very nasty friends (plenty of those in the neighborhood, so there's that opportunity, but that doesn't seem like a very attractive thing for us) or an oil deal.

Now, the Iraqi government has for the last several years been talking about devising a policy to share the oil wealth with all Iraqis. So far they have not managed to do this, because of course the oil wealth is the way that you keep yourself in power, by controlling it.

So I don't think that a three-state solution is a viable solution as a sustained effort. It will create three regimes that depend on a very small set of people to keep them in power, each of which will have incentives to engage in rather nasty behavior.

My biggest worry about Iraq is related to this, and that is that if the United States takes out the 50,000 troops that the president wants to not call combat troops—but they obviously will be combat-ready troops—if we take those 50,000 out, then I think what is likely to happen in Iraq is that Hashimi and the Sunni interests—they're ascending in political influence—will become a threat to the Shia regime.

The Iranians will look at that and see that as a threat to their security. There is a consequential significant probability that they would militarily intervene in Iraq to bolster the Shia government, create a puppet Shia government, and make Iraq a vassal of Iran, to put down the Sunnis. If they didn't, they'd end up with a hostile regime.

If we keep 50,000 troops in Iraq, that doesn't happen, at least in my analyses. So what I see as the stabilizing solution to Iraq is keeping those 50,000 troops there.

They have a secondary benefit, which is that in negotiations with Iran, as Iran begins to transit to the sort of government I foresee, this gives us a powerful bargaining chip, because the Iranians look at the troops that we're keeping there and they don't see them as a stabilizing element in Iraq. They say, "Why do they have troops here? They intend to come in and invade us." So they may serve a deterrent purpose.

To me that's where the future lies. Keep the troops in, and that will help to stabilize Iraq. It won't be a democracy, nor will it have the nice pretenses of democracy, but it will be a civil society. We should be accompanying that troop presence with a more efficient mechanism for making sure that Sunni interests are incorporated, particularly in the distribution of oil wealth, so that the pressure to overthrow the regime is diminished.

QUESTION: I was wondering if beyond direct incentives, like the examples you gave where people were paid a certain amount of money, do you think that in more subtle situations people can be convinced that something is in their self-interest, or do you think that if we don't inherently realize it we won't go towards that direction, such as educating girls in the developing world, which is actually in the self-interest of the developed world?

BRUCE BUENO DE MESQUITA: It's a very good question. Don't let me forget to specifically address the educating girls part of it.

In my models people are persuaded either by carrots or by sticks. That is, it doesn't have to be money, but they are persuaded that something is in their interest because their welfare is enhanced. Or they are persuaded to change their behavior because their welfare will be substantially diminished if they don't. They will be coerced, bullied, whatever, or they'll be rewarded in some way.

So now let's take the question of educating girls, a topic addressed at great length, by the way, in The Logic of Political Survival. So you have made a leap—I apologize; I'm a blunt person—that is a very common leap that I believe is a fundamental error. The leap you have made is that it is in the country's interest, and therefore once the leaders come to understand that, they will do it.

Leaders are interested in keeping their jobs. It is not necessarily the case that what is good for the country is good for the leader. No question educating women, educating the whole population, is good for the country.

Generally for non-democratic leaders, it is not good for them. Why? You want a population that is educated enough (literate, with a basic primary education) that they can be productive workers who generate tax revenue, or wealth the government can expropriate, to keep the leaders comfortable. But you don't want people so well educated that they might form a counter-elite that could become a threat to your political power. The higher the level of education people have, the greater that risk.

If we look at the Third World and we see in non-democratic societies who gets to go to secondary school, who gets to go to university, the regimes are constantly filtering in favor of families loyal to them.

So educating women is good for the economy of the society, good for lots of aspects of the society, often not good for the leaders. Therefore they don't do it.

QUESTION: If your predictions in Iran are correct, why is it necessary to put teeth into the people who are going to be doing the inspections? Or is that built into your prediction?

BRUCE BUENO DE MESQUITA: No, it's not built in. Under the current circumstances and the evolving circumstances over the next few years, the domestic political pressures in Iran are sufficient to constrain the regime not to build a bomb.

Go farther out. There is the possibility of those domestic political pressures dissipating. A civilian nuclear energy industry, for example, is very popular in Iran as an objective, which is of course what the Iranian government claims is their goal. But down the road that can change.

If that evolves in a less attractive direction, things change. I had this little comment in the talk that if we don't have a deal in 2011, things start to look a little worse. It's still the dominant prediction, but it looks a little worse. If we go out further and we haven't resolved this, it keeps getting a little worse.

So the inspection regime is to ensure that if the situation on the ground domestically in Iran shifts, if it goes in a bad direction, that we have in place the mechanism to prevent the folks who still want to build a bomb—there are plenty of them—from coming into power.

There could be shocks, exogenous shocks in the world, things that happen that are hard to anticipate. Governments often fall because of things like earthquakes, natural disasters. These are hard to predict. If that happens and it pushes Iran in a bad direction, we want to have in place the regime that is ready to deal with it then. So it's a precautionary measure. It is anticipatory of the possibility that things could go awry.

QUESTION: To what extent does your model look into individual characteristics? By that I mean something like the force of character, the ability to persuade, the charisma of what you identify as the key players in the decisions that are going to be made within a particular country or institution.

BRUCE BUENO DE MESQUITA: At one level, the model is that everybody is just a cold, calculating s.o.b. The distribution of the data, however, reflects tradeoffs that different people are making—that's what shapes the landscape of the data initially—between, as I put it, wanting to get credit for putting an agreement together and wanting to stick to their guns.

We sometimes look at people who put agreements together as having persuaded through charisma. Maybe yes. Maybe no.

The model looks explicitly at persuasive ability. That's what's going on in the game: Can I persuade, and how do I persuade? But it doesn't look explicitly at personality or emotions.

It is a model. It's a bundle of equations. We should never confuse a model with all the details of reality. We should never allow a model to replace good judgment. But neither should we ignore a rigorous, transparent methodology that is accurate 90 percent of the time in order to try to capture the 10 percent of cases at the expense of the 90.

So what a model in my view should be used for—this is my ongoing mantra to the intelligence community—is when the model's results agree with you, great, you feel more confident in your result; when the model's results disagree with you—and I'm going to give you a concrete example in a moment—they should give you pause to step back and say, "Okay, why? What have I assumed that the model does not assume, or vice versa?"

Typically, in the intelligence setting the data going in are from the intelligence community. It is what they know about the problem. Now, maybe they know other things that shape a different decision. But there should be a discussion generated by this thing where we know how the reasoning process works—the model produced an answer different from whatever the method was that went on in your head to produce an answer. Explain to me how you got there. I can explain how we got there. Let's discuss it.

I did a briefing on Iran, the details of which I can't go into, for the National Intelligence Council on August 21, 2007. This is not a date that's likely to mean anything to any of you, but it was a very important date. My briefing occurred immediately after the National Intelligence Council had produced its then-national intelligence estimate on Iran's nuclear program.

My briefing, based on model results, fundamentally disagreed with the opinions, the assessment, the estimates that were contained in the national intelligence estimate.

The person who was hosting my talk is the person who is responsible for nuclear proliferation matters in the National Intelligence Council. He gave me an incredibly hard time. He disagreed with my conclusions. It was his data, or his group's data. We argued quite vigorously.

At the end of the day—it was rather acrimonious, kind of unpleasant—I went home, disappointed that they weren't paying attention. I was pretty confident the results were right.

Two days later he sent me an email. I had been pretty tough on him. I had said to him, "Look, I have explained to you the logic that got me to this answer. I'm not hearing on your side a coherent, logical explanation of how you got there. What I'm hearing is that this is what you believe, this is your view, but not how you arrived at it."

He sent me an email two days later: "I go to a lot of briefings. The person gives a talk. I thank them. I shake their hand. I go away. I never give them another thought. I've been awake for the last two nights because of you. I cannot dismiss what you said about me. I cannot dismiss that I just have an opinion and I can't defend it. I'm going to go back and look at this."

This is the person who in October 2007 wrote what was then a dramatic reversal that made the newspapers: the national intelligence estimate in which it was revealed that, according to their intelligence assessments, Iran had ceased its nuclear weapons ambitions a couple of years before.

Now, it wasn't that suddenly he believed me—why would he?—but the model results compelled him and compelled his colleagues to step back and say, "How did we come to this conclusion? There's lots of evidence contrary to it. How do we defend this conclusion?" And they changed their conclusion. That's what a model should be used for.

QUESTION: You made a passing reference to the unpredictable earthquake or tsunami, a black swan event. As a general rule, is the black swan event a validating element in your model or is it a general exception?

BRUCE BUENO DE MESQUITA: It's neither. I have learned from the errors in modeling in the past. In the apprentice version I've turned this off, although you'll see that it's there.

One of the things the model allows me to do, and that I routinely do, is this: I do an assessment, and then I have little switches on the model where I can say, okay, I want the data on salience or on position or on clout (or on any of the variables, or on all of them) to be randomly shocked. I want them to suddenly swing randomly to bigger values, smaller values, wherever they go. I do enough of those runs. What I want to simulate is how robust the result is against unanticipated changes in the world.

So for a black swan event the question then becomes: How big does it have to be? It is something that is now systematically modeled.
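The shock procedure he describes can be sketched in a few lines. This is only an illustrative toy, not his actual model: I assume the forecast is simply a clout- and salience-weighted average of player positions, and I measure "robustness" as the share of randomly shocked runs whose forecast stays near the baseline. The function names, parameters, and tolerance are my own choices.

```python
import random

def weighted_outcome(positions, salience, clout):
    """Toy stand-in for the model's forecast: the clout- and
    salience-weighted mean of player positions (0-100 scale)."""
    weights = [c * s for c, s in zip(clout, salience)]
    return sum(w * p for w, p in zip(weights, positions)) / sum(weights)

def robustness(positions, salience, clout, trials=1000, shock=0.3, tol=10.0):
    """Randomly shock every input up or down by as much as `shock`
    (e.g. 30 percent) and report the share of trials in which the
    forecast stays within `tol` points of the unshocked baseline."""
    base = weighted_outcome(positions, salience, clout)
    jitter = lambda xs: [x * random.uniform(1 - shock, 1 + shock) for x in xs]
    stable = 0
    for _ in range(trials):
        shocked = weighted_outcome(jitter(positions), jitter(salience), jitter(clout))
        if abs(shocked - base) <= tol:
            stable += 1
    return stable / trials
```

A robustness near 1.0 corresponds to his "dominant prediction": the result survives most random perturbations of the inputs.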

I did a paper a few years ago at the suggestion of John Gaddis, the distinguished historian, who was very skeptical of the sort of work I'm doing. John, before he went to Yale, when he was at Ohio University in Athens, Ohio, had done a paper critiquing international relations theory. He was informed that he had not paid attention to this body of work. So he looked at it.

He invited me to Athens to spend a week with him and his students, attending his seminars, doing a public lecture, and so forth, and basically providing the opportunity for his students and for him to beat me up for a week. That was the plan. And I was to make a forecast. I was to do an analysis in front of them.

I allowed them to choose whatever issue they wanted. You'll be able to pin down the year—I don't remember exactly whether it was 1993 or 1994. He had some students who knew a lot about baseball. They wanted to know if there was going to be a baseball strike, and if there was, would there be a World Series, and so on and so forth.

So I did that in front of them. They gave me the data. I plugged it in and ran the projection. They could see I wasn't sneaking off and asking somebody. I made very detailed predictions, all of which turned out to be correct.

John suggested—okay, he believed actually this might be science—that I take data that any decision maker could have known in the late 1940s and run the model forward with the shocks which I have just introduced and see if we could have predicted the end of the Cold War.

I did a paper, which was published in 1998, in which 78 percent of the simulations produce a peaceful victory in the Cold War by the United States, 11 percent produce a victory by the Soviet Union, and 11 percent produce the Cold War continuing over the span that I looked at, a little bit more than 50 years. So the density of probability was overwhelmingly the United States was going to win the Cold War peacefully.

None of the data were updated. The data were all taken from 1948. Anybody could have known these numbers in 1948. Nothing was updated by events that happened in the actual world. I just ran it forward with shocks. It was what I describe as an emergent property.

That is a way of addressing the black swan problem: seeing how big a shock it takes to change the course.
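One way to make "how big a shock it takes" concrete is to sweep shock magnitudes upward until the forecast crosses a threshold often enough to count as a change of course. Again, this is a hypothetical sketch with a toy weighted-average forecast; the function names and the 10-percent flip criterion are my own assumptions, not anything from the talk.

```python
import random

def forecast(positions, weights):
    """Toy forecast: the weighted average of player positions."""
    return sum(w * p for w, p in zip(weights, positions)) / sum(weights)

def critical_shock(positions, weights, flip_at=50.0, trials=500, step=0.05):
    """Increase the shock magnitude in steps and return the smallest
    magnitude at which at least 10 percent of shocked runs land on the
    other side of `flip_at`, i.e. the outcome 'changes course'."""
    base_side = forecast(positions, weights) >= flip_at
    shock = step
    while shock <= 1.0:
        flips = 0
        for _ in range(trials):
            p = [x * random.uniform(1 - shock, 1 + shock) for x in positions]
            w = [x * random.uniform(1 - shock, 1 + shock) for x in weights]
            if (forecast(p, w) >= flip_at) != base_side:
                flips += 1
        if flips / trials >= 0.10:
            return shock
        shock += step
    return None  # even a 100 percent shock rarely flips the outcome
```

A knife-edge configuration flips at the smallest shock, while a lopsided one needs a very large shock or never flips, which mirrors the 78/11/11 Cold War distribution he describes: the density of outcomes tells you how stable the course is.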

JOANNE MYERS: Sorry. Our time has come to an end. The National Intelligence Council may have been disappointed, but we certainly weren't. So thank you very much.
