ALEX WOODSON: Welcome to Global Ethics Weekly. I'm Alex Woodson from Carnegie Council in New York City.
This week's podcast is with Tom Parker. He is the author of the book Avoiding the Terrorist Trap: Why Respect for Human Rights is the Key to Defeating Terrorism.
After a brief overview of the book and the current state of counterterrorism, Tom and I focused on privacy and surveillance, with some discussion of facial recognition software. As new technologies become more available to investigators, there need to be new discussions about how they can be helpful in fighting terrorism while also respecting the rights of suspects and the general public. A central theme of our talk and the book is that governments often overreact to terrorist attacks, which leads to a dangerous cycle.
For a full transcript and more resources on counterterrorism and human rights, you can go to carnegiecouncil.org. And for more on all aspects of terrorism, including its history, social science, and root causes, I urge you to check out Avoiding the Terrorist Trap.
For now, calling in from North Carolina, here's my talk with Tom Parker.
Thank you so much for doing this podcast.
TOM PARKER: My pleasure. Happy to be with you.
ALEX WOODSON: Just to get started, you're the author of Avoiding the Terrorist Trap: Why Respect for Human Rights is the Key to Defeating Terrorism. Why did you decide to write this book? What need does this book fill?
TOM PARKER: That's quite a long story. The immediate genesis of the book came from serving in the Coalition Provisional Authority in 2003–2004 and seeing firsthand some of the mistakes we were making as we went about our daily business, and as I was hearing Donald Rumsfeld and other Pentagon leaders talk about how the nexus, the heart of the insurgency was dead-ended from Saddam's regime. What we were seeing on a daily basis was the hearts and minds of the Iraqi people slipping away from us because of the way we were behaving, because of the experience they were having on the micro level, just going through checkpoints or interacting with British or American troops.
It was very clear that, while I don't think anybody in Iraq was particularly pleased to see Western forces in 2003, they were pleased for the most part to see the back of Saddam. At least when I first got there, I got the sense that most people were willing to give us the benefit of the doubt, and that evaporated very, very quickly, and it evaporated very, very quickly because of the things we failed to do.
When I came back from Iraq, I was lucky enough to be offered the opportunity by Yale to go and spend six months as a research fellow. I immediately set to work doing research and trying to see if I could contextualize the experiences that I had and see what academic literature existed out there and what case studies existed out there that could shed some light on the mistakes that we made and perhaps point the way to avoiding mistakes like that in the future.
That kind of mission just got more and more relevant to the war on terror—things like Abu Ghraib and the CIA black sites program and the use of drones. It just seemed like we were doubling down on our mistakes and learning none of the lessons that were there for anybody to see. That was really what drove me to write the book.
ALEX WOODSON: That leads into some of what I'd like to speak about today.
I want to begin with a quote that is in the beginning of Part Three of your book, the subtitle of which is "Countering Terrorism within a Human Rights Framework." It's a quote from Benjamin Netanyahu that I think really sums up a lot of what's written in the book and what we can talk about today: "If democratic governments do not fight terrorism with the means available to them, they endanger their citizenry; if they do, they appear to endanger the very freedoms which they are charged to protect."
Do you agree with that quote? You go into lots of different examples in the book about why you might agree or disagree with it, but what about that quote made you want to include it, and what do you think about it now?
TOM PARKER: What I think is important about that quote is that it makes instinctive sense to most people coming to this subject for the first time. Most people think that when you're faced by a ruthless enemy—the famous Dick Cheney quote—"You have to turn to the dark side, you have to take the gloves off, you have to get down in the gutter with them." What it boils down to is that that's just a misconception.
Let me give you a perspective of where I'm coming from on this. My background—I'm not a human rights guy. I was a counterterrorist official, I'm an intelligence officer by training, and an investigator by background. I did not come at this at all from the position that human rights have to be right. I came to this from the position of what works—what do we see working and what do we see going wrong?
What we see in just about every counterterrorism campaign is the state initially overreacting massively. There's an Irish academic named Louise Richardson—now vice-chancellor of the University of Oxford, I believe—who wrote a very good book called What Terrorists Want: Understanding the Enemy, Containing the Threat. She makes the point that there is almost a pathology of state overreaction. You look at every single case study, and the immediate response of a government is going to be to hit back hard. That's particularly true of democracies because they have to listen to their people. The people are scared, they want the government to get tough, they want the government to stop the attacks happening. As a result, you get this kind of almost "security theatre"—Benjamin Friedman coined the term—of looking tough.
For me, the emblematic example of that is something from the United Kingdom, which you can tell from my accent is where I'm from. Every time we have a heightened alert in the United Kingdom, what you'll see is there's nearly always footage on the BBC of a Warrior armored personnel carrier being parked outside the terminals at Heathrow Airport. These things have big chain guns on them, they look very impressive, but the reality is if that machine fires any rounds from its gun, it's going to tear through the entire terminal building and kill far more innocent people than it is going to kill terrorists. It's just entirely the wrong weapon to use for counterterrorism, and particularly to use in a crowded space. It's not there to be used; it's there for theater, to look strong, to look tough.
It's that kind of thinking that informs a huge amount of counterterrorism, but the reality is you're giving terrorism too much respect when you respond like that.
The famous Maoist metaphor for terrorism is the "war of the flea." In the metaphor, the dog represents the Goliath that is the state and the flea is the guerrilla or the terrorist. But what's often misunderstood about that metaphor is that Mao's not talking about the fleabites overwhelming the dog. If you read the metaphor carefully, what he's actually saying is that it's not the fleabites that cause the dog to collapse; it's the dog scratching at the fleabites and the wounds becoming infected that cause the dog to weaken and collapse. That is, it's a self-inflicted wound. The line that I use in the book—and I think it is the best way of thinking about this—is that the war of the flea is actually all about the dog; it's all about how the dog reacts.
Terrorism is a contingent political tactic. Terrorist groups are marginal by their very nature. They don't pose an existential threat to the state. They don't really pose much of a security threat to the state. Most terrorist organizations consist of a handful of people. Even if they consist of several thousand people, even if they consist of 10,000 people, they're massively outnumbered by state authorities. They very rarely have heavy weapons. They very rarely have the ability to hold territory. When they're dumb enough to try to hold territory, they typically get overwhelmed very, very quickly by organized military forces, just as ultimately happened to the Islamic State of Iraq and Syria (ISIS) in Iraq and in Syria, and just as happened to the Pakistani Taliban in the Swat Valley. If they try to hold territory, they're not a match for conventional forces.
What terrorism is all about is provoking the state into undertaking actions that support the terrorist cause. There's a famous article written about 50 years ago that describes terrorism as "political jiu-jitsu." That hits the nail on the head. Terrorism is all about turning the strength of the state on itself and making the state commit repressive acts that ultimately drive people into the arms of the terrorist group.
Terrorists don't appeal to everybody in society, but they do have constituents. They are political actors. They're essentially—to steal Clausewitz's phrase—the "continuation of politics by other means." They're trying to appeal to constituents. They're also trying to polarize society. They're trying to force people to choose a side, and they're trying to force people to choose their side.
When a state acts in a way that is almost symbiotic with what the terrorists are trying to achieve, you're playing their game, and that's what we see states doing again and again and again.
ALEX WOODSON: Have you seen this get worse over the last 20 years, become more of a problem for states reacting too strongly to terrorism, states committing abuses to stop terrorism? I think the casual observer might say, "Yes, it has." After 9/11 we see what happened in the Iraq War. But, as you write in the book, terrorism has been going on for a lot longer than the last 20 years. How have you seen this change in the last couple of decades?
TOM PARKER: I don't know that I would say it has gotten worse as such. You can find plenty of examples of states overreacting to terrorist threats that go back a hundred years. The impulse to react like that has been around for a very long time.
What I think has changed in the last 20 or 30 years is the state's ability to project force across the globe with things like drones. It changes the equation. A drone enables you to deploy force in an incredibly remote area with very minimal risk to your own men and materiel. That makes it much easier, of course, to pursue a more kinetic response to terrorism.
But the reality is—and we saw this with the Obama administration—this is not a bipartisan issue. As you'll see in the book, I'm just as critical of left-wing governments as I am of right-wing governments. This is about the misuse of tools.
The Obama administration massively escalated the use of drones. They were very disingenuous about who was actually being killed in the drone strikes. If you remember, John Brennan came out very early on in the Obama administration and he claimed to Americans that the strikes were so "surgical"—always a word to look out for when you hear people talking about "surgical" strikes because the reality is battlefields are messy and very few strikes are surgical. They were claiming they were killing 40 militants for every innocent civilian they were killing. It eventually turned out that that figure was derived from the fact that they considered every male they killed to be a combatant if they were between the ages of 18 and 65.
That flies in the face of one of the fundamental principles of the law of armed conflict, which is called the doctrine of distinction: you actually have an obligation as a combatant to try to distinguish between combatants and civilians; you can't just call everybody on the other side who is male a combatant. Particularly when you're thinking about an area where the enemy is operating clandestinely, it's a claim you just can't substantiate.
Of course, as we have learned more about the drone program in Pakistan, we know the figures are very disputed. There is not often hard, out-and-out data on this, but there is some very good reporting done by a variety of independent journalistic sources—the Bureau of Investigative Journalism, for example—suggesting that the number of innocent casualties is well over 1,000, possibly over 2,000; and in fact, far from 40:1, the hit ratio is probably nowhere near even as good as 1:2. Every time you hit the wrong person you're killing innocent people, and you're probably alienating communities from your cause.
A great example of this—you may remember Humam al-Balawi, who was the suicide bomber in Khost, who walked into an American base, Camp Chapman, and detonated a suicide vest, killing a number of Central Intelligence Agency (CIA) personnel and CIA paramilitary. He had been providing targeting intelligence, and he had been under the control of Baitullah Mehsud while he was providing that targeting intelligence that the CIA was using to carry out drone strikes. Well, you can bet your bottom dollar that those drone strikes were an invaluable asset to the Pakistani Taliban and also al-Qaeda; they were taking out targets that actually al-Qaeda and the Pakistani Taliban wanted to get rid of, and we were counting those as "successes" at the time.
ALEX WOODSON: Another issue that you write about in your book is something that we have been following at Carnegie Council for the past year or so, which is privacy and surveillance. That's another issue where states might need to surveil different communities and different people, but it can also violate human rights in some cases.
How do you see that playing into some of these issues as well? Do privacy and surveillance issues lead to people becoming more inspired to become a terrorist? Does it alienate people in some of these communities? And how can we alleviate some of that alienation?
TOM PARKER: That's a great question. There's quite a lot to unpack there. I'm going to start with a little bit of a tangent and then work back to the central issue of privacy.
Your sub-question of whether or not this can alienate communities—the answer to that is absolutely. In fact, there's a concept often used in the academic literature called "suspect communities." The way that the state focuses surveillance on a particular group can in and of itself create a pathology of victimization. A sense of everybody being treated as a suspect ultimately alienates and polarizes a community to the point that its members are going to be very suspicious about having anything to do with the authorities—whether or not they have any sympathy for a particular terrorist or militant group—because they're used to being treated as suspects.
In any sort of intrusive or targeted or discriminatory profiling approach adopted by a government to a particular terrorist threat, there's an inherent danger of creating suspect communities and marginalizing groups, who ultimately are the people you probably need more than anybody else to be providing you with intelligence leads. So yes, I think absolutely that's a serious, serious concern.
To return to the issue of privacy, that's a really, really important but very slippery concept, particularly in international law. As you mentioned, there is a recognized human right to privacy. Article 17 of the International Covenant on Civil and Political Rights guarantees that no one shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home, or correspondence, or for that matter to unlawful attacks on their reputation. That sounds like a really strong protection.
The problem is, in international law there is not really a definition of what privacy actually is, and where there have been cases where the courts have explored this, they have been very, very careful to keep that definition as loose as possible so that those protections aren't tied down. So the margins are a little fuzzy, and there's a little bit of room for maneuver. While there is a right to privacy that is actually quite well established in law, what is meant by that right to privacy is actually not very well established in law.
There's a great summary from a 19th-century American jurist called Thomas Cooley, who has this great phrase: the essence of privacy is "the right to be let alone." I think that's really quite a good way of conceptualizing it. We all have a right to go about our business without state interference, absent any predicate for the state to start showing an interest in us.
Within the context of most investigative activity, there is this concept of "reasonable grounds for suspicion"—some sort of criminal predicate or suggestion of criminal activity that has to exist, that has to be objective in nature, and cannot be derived from any sort of discriminatory prejudice—that triggers the need for the state to become involved.
International human rights law recognizes that terrorism itself poses a massive human rights threat and that states have the right to protect life and the obligation to protect life. That means that states have an obligation to protect their citizens from terrorism. All this is understood and accepted within the context of international human rights law. The trick is finding out where the lines are. In the context of privacy, that has been quite a tricky issue.
In the past, the investigative techniques that we typically refer to collectively as "special investigation techniques" (SITs)—the sorts of things that your listeners would associate with intelligence activity, things like running human assets, penetration agents, eavesdropping devices, and covert surveillance—have been subject to an additional legal regime that restricts their use. The idea is that police and state agencies shouldn't use those techniques unless there's a strict need to do so, that the techniques are enshrined in law as a power the state has, and that how they are used by the state is clearly explained in law, so people have a reasonable opportunity to know when they might foreseeably find themselves getting the attention of the state.
Typically, we talk about SITs being used basically within three important principles:
1) Subsidiarity, which is the idea that if there is another, less intrusive way to get that information or a simpler technique that you can use, you should be using that;
2) Specificity. You can only use the information, the intelligence you gain from using this technique, for the task or the objective for which you gathered it. The idea is if you happen across something completely different in the course of a counterterrorism investigation, you would not be able to just take that information and use it for a different investigation;
3) Proportionality. You shouldn't be using a sledgehammer to crack a nut. You should be reserving these techniques for the most serious cases.
Serendipitously, in the past those principles were also balanced by practicality: these techniques were hugely labor-intensive. If you run an eavesdropping device, somebody has to actually transcribe the tapes; that takes a lot of time. So there was a limited pool of human resources available to law enforcement and the intelligence community to actually process the data coming in from these special investigation techniques, and that had a limiting impact on how they were used. They were a highly valuable resource used sparingly for the most important cases.
That is beginning to change now with the development of new technologies, where you no longer necessarily need a human being in the loop. Computers can do a lot of the analytical work—"analytical" with a very small A; I use that term incredibly loosely—let's call it "data sifting," that would have been done by a human being before, and that means, of course, you can do more.
That's a very worrying development. The technology is far ahead of where the law is, so we find ourselves being overtaken by facts on the ground as new technologies are invented and deployed but haven't been tested in court.
An example that always springs to mind for me is that in the United Kingdom we have something called automatic number plate recognition (ANPR). Basically, you have a camera sitting on the dashboard of a police vehicle, and as the vehicle drives along the road the camera automatically scans all the license plates coming toward it and runs searches against the ANPR database. If a car with an alert on it in that database goes past, the system immediately flags the suspicious nature of that vehicle to the police officer driving.
You could argue that that's a somewhat unreasonable search. These are the sorts of technologies that haven't really been tested, and they're becoming more and more ubiquitous because it very quickly doesn't just become a question of a license plate reader on a dashboard of a police car; it could be every traffic camera is also doing this. The technology, the software can be rolled out for more and more electronic eyes around the city, and suddenly you've got a surveillance net that is incredibly intrusive.
In this instance you're only looking for criminal predicates. It's only flagging up people for whom there might be an outstanding police query. But you can very quickly see how this could be gathering information that could be easily abused.
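At its core, the check Tom describes is a simple lookup of each camera read against a watchlist. Here is a minimal sketch of that flagging logic in Python; the plates, alert reasons, and function names are hypothetical illustrations, not drawn from the real ANPR system, which queries a national database rather than an in-memory table.

```python
# Minimal sketch of ANPR-style hotlist matching. All plates and alert
# reasons below are hypothetical illustrations.

# Plates with outstanding alerts, e.g. a stolen vehicle or a police query
HOTLIST = {
    "AB12CDE": "vehicle reported stolen",
    "XY99ZZZ": "outstanding police query",
}

def normalize(plate):
    """Canonicalize a camera read: uppercase, spaces removed."""
    return plate.upper().replace(" ", "")

def check_plate(plate):
    """Return the alert reason if the plate is on the hotlist, else None."""
    return HOTLIST.get(normalize(plate))

# Simulate a stream of plates read by the dashboard camera
for scanned in ["GH34IJK", "ab12 cde", "LM56NOP"]:
    alert = check_plate(scanned)
    if alert:
        print(f"ALERT: {scanned} -> {alert}")
```

The civil liberties concern in the passage is not the lookup itself but the scale: the same few lines work unchanged whether the feed comes from one dashboard camera or from every traffic camera in a city.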
ALEX WOODSON: Another new technology that you touch on in the book is facial recognition software. As that becomes more prevalent and perfected, what are the issues that you're concerned about when it comes to facial recognition software in terms of fighting terrorism and doing counterterrorism operations?
TOM PARKER: That's a really, really interesting area as well. One of the things I should say is that this is a very rapidly evolving new technology. Even since I have written the book there have been developments in this field. Let's start with what is, and then we can talk about what will be.
The technology that exists at the moment is still deeply flawed, and there have been some very interesting reports that have come out in the last six months, most notably from the United Kingdom and the United States, looking at how effective the different facial recognition systems that are currently being used actually are.
The National Institute of Standards and Technology in the United States did a study, actually this month, where they looked at facial recognition software systems being used in the United States by agencies like the Federal Bureau of Investigation (FBI). They found some pretty significant flaws, one of which is that these systems are up to 100 times more likely to misidentify Asian, Native American, and African American citizens than white citizens, and that women are more likely to be misidentified than men. The FBI has conducted 390,000 facial recognition searches of state and federal databases since 2011. If you think of how many false positives this may have thrown up, you start to get a sense of the scale of the problem.
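To make that scale concrete, here is a back-of-the-envelope sketch. The 390,000 searches figure comes from the interview; the per-search false-positive rate and the 100x disparity multiplier are assumed numbers for illustration only, since the NIST findings describe relative error rates across demographic groups rather than a single absolute rate.

```python
# Back-of-the-envelope: how a small per-search false-positive rate scales.
# The rate below is an assumed illustrative figure, not a NIST statistic.

searches = 390_000           # FBI searches since 2011, per the interview
false_positive_rate = 0.001  # assumed: 1 misidentification per 1,000 searches

expected = searches * false_positive_rate
print(f"Expected misidentifications overall: {expected:.0f}")  # 390

# If one demographic group's error rate is up to 100x higher, each search
# of that group carries up to 100x the risk of a false positive.
disparity = 100
print(f"Per-search risk for worst-affected groups: up to {false_positive_rate * disparity:.0%}")
```

Even a rate that sounds tiny produces hundreds of wrongly flagged people at this volume, and the demographic disparity concentrates that burden on a few groups.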
If you get a false positive as you're passing through an airport, you might miss your flight, you might find yourself being interrogated, you might find yourself having a very tense police encounter. You could imagine a situation where you have been identified falsely as perhaps being a terrorist, and then suddenly that police encounter might become very dangerous, particularly if the police think they're responding to a terrorist suspect. They might draw their weapons. You can very quickly imagine a scenario where a misidentification could escalate very quickly to a situation where someone's life is put at risk. That's pretty alarming.
In the United Kingdom, where the police have been using facial recognition systems in a couple of force areas—South Wales and the Metropolitan Police spring to mind—there's a report that came out in the summer from the University of Essex Human Rights Centre, which found that the systems used by the Metropolitan Police in London were about 81 percent inaccurate, which means basically that in the vast majority of cases it flagged up faces to the police that were not on the wanted list. That's pretty alarming.
They didn't look at every single system the police were using; they looked at about six of the ten systems that were in use, and the actual sample size was quite small. But nevertheless, you can see the scope for tragic outcomes. And to go back to the concept you raised earlier about suspect communities—particular groups and ethnic minorities being unreasonably targeted, and perhaps that driving a degree of radicalization—you can see how this could very quickly become the case, particularly if in many cases it's a false positive that ends up driving the encounter with law enforcement.
I think the situation as it is at the moment is quite troubling. The technology just isn't good enough, and it's being used in a way that can impact citizens and can result in police encounters that shouldn't have taken place: The person is doing absolutely nothing wrong, they're going about their business, and they're being materially disadvantaged as a result.
That can literally happen to anyone. I've had the same problem myself coming back into the United States because there is a similar name to mine in the border protection database, and for about three years every time I came into the country I got pulled aside, and we had to wait for an hour and a half while they went through the various steps. Typically, it was about waiting for somebody to become available who could visually verify I wasn't the person they were looking for, and then they would let me go. On several occasions I missed a flight.
That can happen. I'm pretty relaxed about that as a person myself, but what I'm not so relaxed about is it happening five times. The system was so inflexible that you couldn't simply put a data point in the record—"this passport number is not the person we're looking for"—because the database being used by the federal agency in this instance wasn't sophisticated enough for the guys doing the checks to do that. That's again worrying: when you're coupling modern new technology to fairly old, outdated database technology, the situation is just not likely to end well.
When we look at the big cases where facial recognition has been used, like the Boston Marathon bombing, it was used, and it didn't work. One of the bombers was in the database; they ran facial recognition software, and his face didn't pop up. It's certainly, at this point in time, not the all-seeing eye that it could become.
Then you take a step back and look into the future and, assuming the technology can be perfected, then you're looking at a totally different scenario where then we have to start asking ourselves about what kind of surveillance state we would find ourselves living in, where you're surveilled everywhere you go. When you think of the number of cameras—I forget the figure for the United Kingdom, but I think in London they reckoned that there was a camera for every four citizens, something like that.
You just have to go around your daily life looking for cameras, and you will start to see them everywhere: there's a camera when you take money out of the ATM, there's a camera when you park your car in a car park, there are cameras on the roads, there are cameras on some of the vehicles you pass, there are cameras on the corners of most buildings. Many of these are privately held at the moment, but it's not too hard to imagine a networked system where the degree of surveillance and the data being recorded and aggregated on databases could be quite extreme.
You've seen an intimation of that world in China, where the Chinese have been developing this kind of technology, and they've been marrying it to concepts like Social Credit. You could imagine very quickly a situation where the computer is automatically registering your presence at a protest, it's automatically registering your presence at a medical facility, it's automatically registering your presence and what you purchase in stores. You can suddenly see the potential for the state to gain an incredible degree of knowledge about your everyday life, and that world is not very, very far away.
That future is very much at odds with the right to be left alone that the concept of privacy entitles us to. That's the tension that's approaching. As I say, at the moment there is not a great deal of law that has developed to really answer how we navigate these kinds of challenges.
ALEX WOODSON: With these dilemmas in mind—we've talked about targeted killing and drones, privacy and surveillance, and how those can cause issues—what are some strategies that you've seen that have worked in counterterrorism? What are some examples that you can speak about that you've seen that have actually worked and also shown a respect for human rights?
TOM PARKER: This is a difficult question. It's almost a slightly unhelpful game to play. There are no—and this is a tired phrase and you will have heard it many times—"magic bullets." Counterterrorism is a grind. It's no easier to defeat terrorism than it is to defeat crime.
Terrorism doesn't drop from the sky; it's usually a response to a set of circumstances—structural and personal, but often structural. They may be political, they may be socioeconomic, they could be driven by ethnic tensions, but there are going to be underlying drivers for this behavior. If those underlying drivers aren't addressed, you're going to have a very hard time eliminating the behavior because what you're ultimately doing is treating the symptoms; you're not treating the cause of the disease. I don't like that metaphor, but it's one that Che Guevara used, and I think it's quite apposite. We focus way too much on the symptoms and way too little on the underlying causes.
Quite a good way of thinking about it comes from a British Army officer who was involved in running the British counterinsurgency campaign against Ethniki Organosis Kyprion Agoniston (EOKA), the Greek Cypriot nationalist organization that was active against the British in the late 1950s. This chap, Sir John Harding, was a field marshal and the British governor of Cyprus. Coming under a great deal of pressure from London for failing to stamp out EOKA's activities, he wrote in a report: "The most we can hope for is a slowly rising curve of success in the aggregate, and the actual course of the graph is bound to be erratic."
That's a very wordy and somewhat tedious phrase, but it's exactly right. With good counterterrorism, you're going to have a slowly rising curve of success over time and one that can easily be damaged or reversed when you make mistakes. So, you have to look at a long-term body of practices that are going to grind away at getting results without alienating people.
There's the famous Donald Rumsfeld quote. Rumsfeld is far from somebody who I admire in many respects, but he's certainly no fool, and he had some very good comments during the conflict in Iraq and Afghanistan, one of which is, "We need to understand whether we're deterring and capturing more terrorists than we're making through our course of action." That's spot on. Your counterterrorist activity has to be pouring water on the fire. It has to be trying to diminish the reasons why people are flocking to the terrorist flag, and if you're not doing that, you're making things worse.
Most people involved in terrorist actions do understand that they're breaking the law. They do expect consequences for their actions. They also, weirdly, expect the state to play by its own rules. And when the state doesn't, that becomes a huge frame amplification for their narrative.
You look at a lot of terrorist organizations—and you see this is in terrorist literature going back decades and decades—a very common phrase used by the terrorists is: "We're trying to rip the mask from the state. We're trying to expose the fascist regime that hides behind democracy." You see that a lot particularly in the Marxist terrorist groups in the 1960s and 1970s, but you also see it now with groups like al-Qaeda and some of the rhetoric that they use. They're trying to prove that particularly democratic states are not as liberal or not as fair as they like to pretend they are. So, when you act in a way that underscores that, you are giving a propaganda advantage to the enemy.
David Petraeus has a famous phrase about Abu Ghraib where he says: "Abu Ghraib and situations like it are nonbiodegradable. Once you make a mistake like that, the enemy gets to hit you with it like a stick, over and over and over again."
We gained nothing from Abu Ghraib or the black sites. If you read the Senate Select Committee on Intelligence's report on the CIA's use of torture, there was very little intelligence benefit that came out of that program, but the damage done to America's reputation around the world was pretty catastrophic.
The images of Abu Ghraib crop up again and again in terrorist propaganda. You can read narratives and biographies of terrorists who will talk about how seeing that on television was one of the reasons why they decided to become involved in terror. One of the Charlie Hebdo attackers, before he killed himself, was talking to a French journalist, and the journalist asked him why he was doing what he was doing, and he said, "Because of Abu Ghraib and all that."
You don't act in a vacuum. The things you do, the actions you take, they have consequences, and they can either, as I say, pour water on a fire or they can pour gasoline on it. A lot of the more aggressive things that make us feel better because we feel like we're imposing order on chaos, we're taking control, actually what they end up doing is making things worse, polarizing society further and driving more and more people into the arms of the terrorists.