JOANNE MYERS: Good morning, everyone. I'm Joanne Myers, director of Public Affairs programs here at the Carnegie Council, and on behalf of the Carnegie Council I not only want to wish you all a very happy New Year but to thank you for beginning your day with us.
To begin the New Year, we are delighted to welcome the widely-acclaimed New York Times journalist and best-selling author, David E. Sanger. As a national security correspondent for the Times, David has been a diligent observer of cyberspace issues for years. He will be discussing his latest book, entitled The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age, and it will be available for you to purchase at the end of the hour today.
We are excited to begin 2019 with a discussion on cyber conflict. Although it is a fairly recent phenomenon, it is one that has dramatically transformed geopolitics and the nature of warfare, and will continue to do so for years to come.
While individual hackers and organized crime groups have targeted businesses for quite some time, cyberattacks have rarely created political risk, but they do now. Whether we are talking about Chinese theft of personal data, North Korea's financially motivated attacks on American companies, or Russia's interference in the 2016 election, cyberweapons have become the weapon of choice for democracies, dictators, and terrorists alike. It's easy to understand why. As you will soon learn, they are cheap to develop and easy to hide and to deny, but most importantly they leave opponents uncertain about where the attack came from and thus where to fire back.
The Perfect Weapon is the inside story of how the rise of cyberweapons transformed global power. In it, David focuses on the emergence of cyberwarfare as a primary way for states to attack and undermine each other. It reveals a world coming face-to-face with the perils of technological revolution.
Whether it's computers, phones, transportation, electrical power grids, water supplies, or global navigation and communication satellites, it is a world in which virtually everything we rely on is interconnected and vulnerable to disruption if not destruction. Cyberweapons are said to be the number-one threat to our national security, and yet, according to our speaker, we are unprepared.
Why have we been so slow to adapt to this new reality? How can we defend our security, economy, democracy, and privacy from cyberattacks? For the answers, please join me in giving a warm welcome to one of the best-known voices in American journalism writing on these issues, our guest today, David Sanger. Thank you so much for joining us.
DAVID SANGER: Well, thank you, Joanne, for that, and thank you all for coming out on a cold morning and joining us here. It's great to be back at Carnegie. Great to see my friend and college classmate Joel, who has guided you so well through all of this, and so many other friends who are here.
As Joanne suggested, when you think about the collection of threats that face the United States, sometimes it's hard to prioritize what's a larger threat and what's a smaller one. It seems particularly hard to prioritize these days in Washington, where you might think that the construction of an old-fashioned wall across the Southern border would actually do much about the number-one threat to the United States.
So maybe a good place to situate where we are in cyber is to think about the intelligence community's own annual assessment of where the threats are and which ones you need to worry about. We get this from an annual report that the intelligence community has to give to Congress. It hates giving this to Congress because it's an unclassified document, and it actually reveals a little bit about what they're thinking. You know how the intelligence community feels about that, and we'll get into it some more when we discuss why there's so much secrecy around cyber.
After 9/11, not surprisingly, terrorism was listed as the number-one threat facing the United States, and that remained true for a good number of years. In fact, if you go to the 2007 assessment of threats facing the United States, which John Negroponte, who was then the director of national intelligence, gave to Congress, it starts with terrorism, as you would imagine. It was only six years after 9/11.
If you want to go read the cyber section of the report—it's usually a 20- to 25-page report—it doesn't take you very long. The word "cyber" does not appear in the 2007 threat assessment.
A few weeks after that threat assessment came out, Steve Jobs came along and held up to the world for the first time the first iPhone. Actually, if you look at a first-generation iPhone now—and it shouldn't be very hard, because it's probably pushed to the back of your bedside table; if you look, you can find it there—it feels pretty blocky and heavy.
But for the purposes of the threat assessment it changed everything because for the first time computing went truly mobile. In other words, you didn't have to worry anymore if you were an attacker, particularly a foreign attacker, about how you were going to get inside the Pentagon or inside the White House or inside McDonnell Douglas or inside any other defense contractor because the employees kindly enough were going to walk the computer in for you. So if you could simply get inside that device, the device was going to take you into the network that you needed to attack.
The same is true in many ways in your home. When you think about it, in 2007 when that report came out, I'd be surprised if you had more than one or two things connected to the Internet, maybe a laptop computer, certainly a desktop computer.
Today, think about it: you've got a Fitbit, an Alexa, probably a smart TV, your car parked out in the driveway or the garage. Some people have Internet-connected refrigerators these days. I haven't quite figured out what I'm supposed to do with one of those. I guess if it told me to eat less, that would be useful. But that explosion in the number of devices we have connected to the Internet—and it's worse at the office than it is at home—tells you why this has exploded as a threat, because every one of those devices is making your life easier, and every one of them is a new attack surface for anybody trying to get into a network.
And they're all over the map in how good the security is on those devices. The worst are those little Chinese-made video cameras that you can put out in front of your door, which turn out to come with a single built-in password that you can't change. It's really a great device.
In fact, there was a major cyberattack two years ago in which the attacker massed the power of millions of these cameras all around the world, unbeknownst to the people who had put them up on their houses and front doors and so forth, and used them all to attack a single company that runs a lot of Internet operations called Akamai, right off the Massachusetts Institute of Technology (MIT) campus. It gives you a sense of how something that seems pretty innocent and easy enables an attack like this.
In the world of cyber, what has happened is that our concept of what you use cyber for and how states can use it has morphed as quickly as these devices have expanded. Initially, people looked at cyber, and they viewed this merely as a surveillance device, right? People invented the telephone; the FBI and then the intelligence agencies figured out how to intercept telephone conversations. Before that, people invented the U.S. Postal Service, and they figured out how to intercept and open up the mail.
Cyber was no different. It was simply a question at that time of intercepting messages: with legal process inside the United States, and without legal process if you were an American intelligence agency operating outside the United States. And surveillance is still a lot of what's going on.
The Perfect Weapon is not a book about surveillance. Surveillance is in common use, but it's not especially interesting.
But cyber then enabled you to begin to do some other things. Once you could get into the networks and see this data, then you were suddenly able to get into those networks and manipulate this data. And once you can manipulate data, you can do all kinds of havoc.
We've seen some, particularly in the world of theft, when the North Koreans got into the Society for Worldwide Interbank Financial Telecommunication (SWIFT) system, which handles the messaging for international interbank transfers, and were able to alter the data about where SWIFT transactions would go. Suddenly the Bangladeshi central bank was sending $1 billion to an account nobody had ever heard of in the Philippines. It's a pretty good bank robbery if you don't actually have to enter the bank.
As it turned out, the North Koreans only got $81 million out of this because in one of the orders they misspelled the word "foundation" as "fandation," and a very alert young woman who works at the New York Fed saw the order coming through and thought it was strange—which was pretty impressive, because she wasn't a native English speaker; she was German—and went back, looked, and stopped the transactions after the $81 million had gotten through but before the next $940 million made it.
The North Koreans learned a lesson from that. It's just like your mother always said: Spell-check everything.
You can imagine other great uses of manipulating data. Suppose somebody got into the databases of the U.S. military—forget re-targeting nuclear weapons, which would probably be difficult. All you would have to do to cause great havoc is change the blood types of everybody listed in the medical directories. The same goes for almost any other form of medical record.
But there are other uses of manipulation of data that actually enable you to affect real-world events, and this is where the state-on-state competition has actually begun to play out in its biggest role. You heard in Joanne's introduction that the central thesis of this book is that cyber has become the primary way that countries conduct short-of-war operations against each other, and the key words there are "short of war." Nobody's going to take on the United States or its allies—its biggest allies, anyway—directly in a military confrontation these days. People have a pretty good sense of how that would end. Kim Jong-un has his limits, the Iranians have their limits, and others.
But cyber gives them something a little more subtle: you can manipulate data, get in to affect some real-world events, or release data that you pulled out through surveillance and then use it for political purposes. You can actually accomplish your political or economic goals but calibrate the attack in a way that makes a military response unlikely. And that's what makes it the perfect weapon: not only is it deniable, hard to trace, and hard to find, it's also incredibly cheap to produce. Nuclear weapons still take millions of dollars if not billions, plus big facilities and enrichment plants, and it's messy. For a good cyberweapon, you need some teenagers or 20-somethings, a little stolen code from the National Security Agency (NSA), of which there's a lot floating around, and maybe a good case of Red Bull, and you're good to go. So the cost of entry is really cheap.
That's why we have begun to see states use this so frequently and to play with the calibration on the bet that the United States government or its allies will not be able to figure out whether this is an act of vandalism, an act of sabotage, or an act of war. And that's all still incredibly blurry and has been across the Bush administration, the Obama administration, and the Trump administration.
Where were the original sins here? How did this burst into use and get justified as a significant new weapon of choice for states around the world? Who was the first one to really make a big move here?
Well, the answer, naturally, is us, as it usually is, right? If there was a Hiroshima-and-Nagasaki moment in the world of cyber, one that fortunately was a lot more subtle and did not result in the kind of death and destruction that we saw then, it's Operation Olympic Games, the code name for the U.S. operation against Iran's nuclear enrichment plant.
This was a fascinating story to work on and a fascinating story to break when we first got going on it. We had reason to believe that this was underway. The first book I wrote 10 years ago was called The Inheritance: The World Obama Confronts and the Challenges to American Power. It was a book about what President Bush was leaving Obama. It came out just weeks before the Obama inauguration.
It opened with the Israeli effort to try to get President Bush to give them bunker-busting bombs to attack Iran's nuclear enrichment plants. Bush turned them down but did bring them into a covert program to attack Iran's computer systems and try to get at their nuclear program. He did that because a group of generals and intelligence officials had come to him just about a year and a half before the end of his presidency and said, "Sir, we realize that you can't go out and publicly accuse Iran of having a nuclear weapons program because you did that with Iraq and it didn't work out so well. Instead, rather than go out and make the public case, here's another approach." And the approach basically was to get into the computer controllers that ran the Natanz nuclear enrichment plant and alter the data in those controllers so that they would speed up and slow down the centrifuges that spin to produce nuclear fuel.
Imagine these centrifuges. They spin at supersonic speeds. They're in these big silvery floor-to-ceiling devices, and if you slow them down or speed them up too rapidly, it's like a top that is suddenly spinning out of control. If any of you have seen the documentary made from my last book, Confront and Conceal: Obama's Secret Wars and Surprising Use of American Power—it's a documentary called Zero Days that you can find in various places on the Web—it has pretty good imagery of how this all played out. We hired a lot of artists to recreate how it took place. But basically, once the centrifuges go out of control, they explode.
The brilliance of this as a weapon was that the Iranians spent two years trying to figure out what was going wrong. Did they have bad parts? Did they have bad engineers? Were their Occupational Safety and Health Administration (OSHA) regulations just not up to snuff? What was going on that was creating these unexplained explosions in the underground facility?
The answer came when the program went slightly awry and the code itself got out, not because of any nefarious act, but because the Israelis had tweaked the code in a dispute with the Americans about how best to write it, and they made an error in the course of pushing the software out too quickly. It happens to your computer, it happens to your iPhone, and it happens to covert programs, as it turns out. A worker who had been at Natanz had plugged his computer into the system, and the code got onto his machine. He went home, hooked up to the Internet, and unbeknownst to him the code started spreading around the Internet. Suddenly, in the summer of 2010, we started seeing tens of thousands and then hundreds of thousands of copies of this thing spread around the globe, and it was the actual source code, which we were able to pull apart.
When we looked at that source code, what did we discover? That it only went into effect when it came into contact with systems that had 164 machines attached to them. That mystified a lot of people in the computer world, but if you had covered the Iranian nuclear program for a long time, as my colleague Bill Broad and I had, you knew that the Iranians organized their centrifuges in groups of 164. So it was pretty clear pretty fast what this was intended to do, and the more we dug into it the more we understood it.
Then the question came: How did this program come to be? We went back and discovered the Situation Room meetings in which President Bush had authorized it. He actually sent the generals and intelligence agents off to Tennessee, where they borrowed a bunch of centrifuges that had been turned over to us by Muammar Qaddafi when he gave up his very nascent nuclear program. They checked out the centrifuges and told the people in Tennessee not to expect them back. They built a model of the Natanz facility in a hillside in Tennessee. They attacked it with the software. They discovered it worked. The centrifuges exploded.
They packed up the remnants, put them on an airplane, flew them to Washington, drove them through the Northwest Gate, dumped them out on the conference table in the Situation Room, and invited President Bush to come down and take a look. He took one look at it and said—Joel would be upset if in the nice precincts of the Carnegie Council I told you what President Bush said, but let's just say it was an earthy Texas-ism that began with "Oh." Anyway, he looked at this and authorized the program to get going.
President Obama inherited it, and President Obama as he was beginning to approve these attacks—he had never really thought about offensive cyber before he became president—said to his staff, "You know, if we do this, it's fine, but just be aware that every other country in the world that is attacking us already or wants to attack us is going to use the fact that the United States is using this weapon to justify their own attacks." And, of course, that's exactly what has happened.
Since Olympic Games happened, and since the Iranians figured out from that code what had gone on, Iran attacked Saudi Aramco, wiping out 30,000 computers just to make a point to the Saudis. Those of you in the financial industry will remember the Iranian attacks on Chase and Bank of America, and on a small dam up in Rye, New York, that I used to play on as a kid. I could have told the Iranians there's no water behind it, but they never asked.
Those were the Iranian attacks. They also went after—they really know how to hurt a country—the Sands Casino in Las Vegas. Can you imagine a nastier attack on the United States than to attack its casinos? They did that because Sheldon Adelson showed up one day at a university here in New York and, in remarks that ended up on YouTube, declared that if you wanted to teach the Iranians a lesson, you should just explode a nuclear weapon in the desert and tell them that Tehran was next. So they thought: Sheldon Adelson. Desert. Las Vegas. Casino. I bet we can get to him before he gets to us. And sure enough, his workers came into the Sands Casino one morning and discovered that all of the hard drives that run the casino had been wiped clean, whereupon they set about the emergency work of trying to get the casino running without any of us figuring out that they had been subject to a big cyberattack. I'm glad to say that part failed: they did get the casino running, but they didn't manage to hide the attack terribly well.
Then, in 2014, came the North Korean attack on Sony, and this was a seminal moment for the United States government because it's when the Obama administration came face-to-face with the fact that it really didn't know how to categorize these attacks. Some of you may remember that attack because it began with the production of a really terrible movie called The Interview. Anybody here seen it? [show of hands] Wow. Better yet, some of you are willing to admit to it. That's good.
For those of you who have not seen it, I can now save two hours of your life. You can go off and watch something else. But basically, the premise of this is that two journalists go off to interview Kim Jong-un. This was before he and the president of the United States fell in love with each other. They are sent off in the movie to actually go try to assassinate him.
I spent 36 years in newsrooms in the United States. If you were going to hire a group of people to be hit men, I can't think of a group that would be less qualified for the job or more likely to screw it up. But anyway, that's the movie.
The North Koreans got very upset. They did the natural thing you would do if you wanted to stop a movie from coming out in the United States: They wrote a letter to the secretary-general of the United Nations and asked him to block the movie. They soon discovered just how deeply influential the secretary-general is in Hollywood.
When that didn't work, they sent out a team—we now know the members of the team because they were indicted four years later by the Justice Department—to China, Malaysia, and Thailand, and they mounted a very careful, patient attack on the Sony computer systems. They spent months inside the system, mapping out just how the computers inside Sony were connected to the accounting system, the production system, and the archive of movies. They got into the email system and released some of the emails. This is how we discovered the earthshaking news that Angelina Jolie can be very difficult to work with on set. But it was an interesting foreshadowing of what the Russians later did in the 2016 election.
Then, just around Thanksgiving in 2014, people came into Sony and saw a gruesome image the attackers had put up showing the severed head of the president of Sony, another of our classmates, along with a long description of all the reasons some group was attacking the company. It was all a distraction to keep the workers coming in that morning from realizing that their hard drives were being erased. The only people whose data was saved were the ones who reached behind the computer and did the high-tech thing of unplugging it. Seventy percent of Sony's computer systems were wiped out in about two-and-a-half minutes. They had to put out their paychecks that week by going down to the basement and finding the old paycheck machines we used to use in the 1970s.
This was fascinating. It was a political act—they were trying to stop a movie—and the government did not know how to respond. The briefers who went down to see Obama had to start by explaining that they were there to brief him about a really bad movie.
In fact, Obama said, "How do you know it's a really bad movie?"
And the briefer said, "It's a James Franco movie, sir."
Then, Obama said, "Well, this just sounds like vandalism."
But he knew it was something more than that. In the end, he showed up in the press room just before he went on vacation to Hawaii one day in December and announced sanctions against the North Koreans that I doubt the North Koreans felt amid all the other sanctions we do against North Korea.
It wasn't very satisfying. And while the United States called out the North Koreans, they did not call out the attacks that were happening at the same time by the Russians against the State Department, the White House, and the Joint Chiefs of Staff, because they viewed these as classic espionage, not realizing that the GRU (Russia's military intelligence service) actually had a much bigger plan in mind.
Once Vladimir Putin saw we weren't going to defend those computing systems or make him pay a price for going into the computer systems at the White House or the State Department, he must have thought: Who cares about the Democratic National Committee (DNC)? It's run by a bunch of college kids. So, in fact, they spent about a year inside the DNC: two separate Russian intelligence agencies, neither of which knew the other was inside the same system, which means they operated with the same efficiency that, say, we do. One of those agencies actually ended up releasing most of that data. It's a fascinating operation that you can read about in the book, and we now know even more about it from one of the Mueller indictments of the GRU.
Meanwhile, the Internet Research Agency was off doing what it did at Facebook and so forth, which is partly cyber but partly just computer-enabled propaganda.
So where does this all leave us? We now know that we've got this range of vulnerabilities. We know the vulnerabilities are getting worse because right now we've got about 13 or 14 billion devices attached to the Internet, and by mid-2020, so maybe 18 months from now, that number's going to 20 billion devices attached to the Internet. It gives you a sense of what the growth is. That's the growth of the attack surface.
What politically have we done to deal with this issue? Well, treaties don't work. Treaties worked in the nuclear age, as Carnegie and others have studied with such great care, back in the days when we began to think hard about arms control and mutually assured destruction. In that case, we had a small group of countries, and one in particular, that we were concerned about, and they were in complete control of the weapons. So you could do a treaty and get to lower numbers.
Cyberweapons are in the hands of states, terrorist groups, criminal groups, and teenagers. Most of those, especially the teenagers, don't sign treaties. So if you are going to worry about solving this, you've got to separate out what's just the criminal activity that you're going to prosecute criminally, and then what are the rules, the sort of guidelines, the sort of digital Geneva Convention you're going to create to protect civilians.
There's a lot of debate about whether a digital Geneva Convention could work or not. I find it the least bad idea of a series of bad ideas out there about how to deal with cyber, and the reason I think it has at least some legs to it is that while we all know the Geneva Convention itself is not fully enforceable—Bashar al-Assad wakes up every morning trying to think about how he's going to violate the Geneva Conventions that day and usually does a pretty good job of it—at least we understand that there is certain behavior that is outside the realm of what we consider to be acceptable.
So if we were to put together a list of what it is we would say should be off limits to a cyberattack, we could probably do it over this breakfast pretty quickly: the electric grid; specific communications facilities, emergency responders and so forth; places that house the most vulnerable—hospitals, nursing homes, things like that; election systems.
Anybody up for protecting election systems? Ours are largely protected right now by the fact that they're so old, so disconnected from the Internet, and so spread out over 50 states that this has created a natural barrier to hackers. But we can't rely on that fully.
What's the problem with this? Among those who would probably object vociferously to the creation of a digital Geneva Convention would be our own intelligence agencies. Take election systems off? "Well, we understand," I'm sure they'd say, "why you want to do that. But who would want to deprive the president of the United States of the ability to go in and, say, unseat the next Maduro if they thought that that would be an easier solution than going to war?"
Power grids? Dig into the book and you'll discover the first big battle plan put together by U.S. Cyber Command. It's called Nitro Zeus, and it's the highly classified program that was developed in case we went to war with Iran, just prior to the 2015 nuclear agreement that President Obama and Secretary Kerry reached with the Iranians, and which, I understand, has run into some kind of problem lately. That plan basically called for unplugging all of Tehran—its electric grid, its communication systems—in the first 24 hours of a conflict, on the theory that if you can unplug and completely paralyze a country, you may not have to drop a single bomb. Would a lot of people die? Absolutely. Would fewer people die than if you had to bomb the entire city? Probably yes.
This has raised a whole range of new issues that we are not actually discussing, in part because the intelligence community and the military have wrapped so much secrecy around cyber that we are not having the kind of debate we had about how to use nuclear weapons. In the nuclear age, everything about how you build nuclear weapons, where you store them, and who has the authority to launch them was classified, but we had a rip-roaring debate about how to use them, and it ended up in a completely different place than it started.
MacArthur wanted to drop the bomb on the North Koreans and the Chinese. We now know that General Westmoreland wanted nuclear weapons in case he needed them in Vietnam. We had other cases, including during the Cuban Missile Crisis, where some argued for using nuclear weapons. But by the 1980s we had had enough public debate that we got to the thought that we would only use nuclear weapons as a matter of national survival.
Cyberweapons we're using every day, and others are as well, and we're not having that debate because nobody wants to reveal what our cyber capabilities are, which are quite impressive and quite good; there's this natural instinct never to discuss them. So to this day, the U.S. government does not acknowledge Olympic Games. In fact, I went through the pleasures of a four-year-long FBI leak investigation that came out of the Olympic Games stories, a really pleasant process, to be sure. I highly recommend it.
That's part of the difficulty, and it's a big part of the reason I wrote The Perfect Weapon: to try to get all of us—and particularly places like Carnegie, which is already engaged in these issues so well—to think anew about how we want to manage this technology, which has gone from the happy business of connecting people to one another to the predictable, unhappy effects of what happens when states begin to adopt it as a weapon.
So I'll leave it at that. I'd be happy to take your questions. I thank you again so much for having me here today and to all of you for coming out.
QUESTION: Thank you. I'm James Starkman.
Is there any kind of dome, such as the one that covers much of Israel today, that could be deployed by the United States and its allies as a protective shield against cyberattacks?
DAVID SANGER: People have tended frequently to go to missile and nuclear analogies because all the threats that we face in the cyber world sound a lot like the threats that we faced in the nuclear age. The problem is that all the solutions we found in the nuclear age don't apply here.
First of all, you need that back-and-forth of traffic on the Internet. You could try to filter everything coming in and out of the United States, because it comes in and out on large fiber-optic cables. Those cables are another potential threat to us: if you cut a shockingly small number of them, you can black out the country entirely and bring the economy to a pretty big halt, which is why we get worried when we see Russian submarines floating around with what is basically a giant pair of scissors in front, scoping out where we've laid all these cables. But there are a lot of problems in doing that.
First of all, if you have the U.S. government sitting on every one of those cables, you have at least the potential that the U.S. government is surveilling the domestic and foreign communications of U.S. citizens moving across those cables. It's not as if it all neatly separates out, with malware running on one line and your email home to see how Mom is doing on the other.
The second problem you have with it is that by the time it's coming through the cable, it's too late. If you read the U.S. Cyber Command mission statement developed just about a year ago, it talks about "defending forward." It's a concept that really comes out of Special Forces, where we learned that if you're going to stop a bomb-maker from coming in and blowing up Times Square, you had better stop him at the bomb-making factory in somebody's living room in Pakistan, which is why we send Special Forces out to do that.
So the theory here was, look for the malware as it's being developed in a developer's computers somewhere abroad and deal with it there. Sounds great.
Then, think about what you do in real life when you discover it coming together. You see a group of Chinese hackers getting ready to do the next attack on the Office of Personnel Management (OPM), which is where the Chinese got those 22 million security files, which for a year undetected they pulled out of OPM and sent back to Beijing. You see that, and you say, "Okay, we're going to fix those guys," and you go in and you fry their computers the way North Korea fried Sony's. Then, Xi Jinping is on the phone to the president and says, "You know, sir, you just shut down the computer system where we're developing educational software for K-3 students," and your ability to prove that what you were going after was something a whole lot more nefarious is about zero, particularly to prove it in public.
So this is a really hard area of an old problem, preemption. So, no, there is no overall dome that you can deploy. It runs afoul of both our legal concerns and our technological concerns.
QUESTION: John Hirsch. First of all, thank you very much.
I want to link what you're saying to the official emergency of the day, which we are told is this humanitarian crisis on our border with Mexico. First of all, is that an issue in reality, or is that just a fantasy of the president?
Second, how does the president's focus on this so-called "humanitarian crisis" relate to the effort, such as it may be, of the intelligence community of the United States to deal with the issues that you're raising?
DAVID SANGER: Well, it's all a question of where you put your priorities. The president, for obvious political reasons, doesn't follow the ranking that you see in that director of national intelligence assessment to Congress.
The next assessment is due next month because it comes out in February of every year. So, pity poor Dan Coats, who has to sit here with his ranking—cyber, terrorism, nuclear proliferation, rise of China, rise of Russia. In last year's report, you get to the bottom of page 14 before you get to the first issue that vaguely goes across the border, and it had to do with transnational shipment of drugs and particularly the opioid crisis in the United States, a very serious crisis. Most of those drugs are coming through normal entry points; they're not coming over the wall.
But this is the problem when you try to take a sort of rational explanation of the threats and try to apply it as a matter of policy to a political environment. The president didn't campaign on "I'm going to build us the biggest Internet filter on Earth." He said, "I'm going to build us the longest wall on Earth." And that's part of the difficulty there.
We just had a story on this in yesterday's Times. You ask intelligence professionals like Nick Rasmussen, who used to run the National Counterterrorism Center until a year ago, or others of his colleagues where this ranks, and his answer is, "Pretty low on my list of all other things."
Then you can ask, what did the Trump administration do to deal with cyber? They actually started off pretty well. They had a homeland security advisor in Tom Bossert, who understood the subject matter, dealt with cyber issues a lot during the Bush administration, spent some more time working on them in the private sector during the Obama years, and came back in as homeland security advisor. I didn't agree with every analysis he had, but he was certainly very capable of dealing with it.
Then they had a cyber coordinator at the White House, a job that had started at the very end of the Bush administration and grew in power in the Obama administration, and they picked for that a guy you have probably never heard of named Rob Joyce, but very well-known in the intelligence community because he had run the Tailored Access Operations (TAO) unit of the NSA. That's the sort of Special Forces of the NSA that breaks into foreign computer systems and executes operations like Olympic Games.
Just like a bank wants to hire an experienced bank robber, if you want to figure out how you're going to defend your systems, you want to take somebody who has devoted his life to breaking into the most secure systems around the world elsewhere because you're not likely to take a lot of nonsense from people who tell you, "Oh, my system's secure."
So what happened to those? John Bolton came in, and the first thing he did was get rid of Bossert within 48 hours, in part because he had a direct line reporting to the president, and I think Mr. Bolton had a different concept of how he wanted to organize the national security organization in which only he had that direct line. The cybersecurity coordinator's job? Eliminated. Not just that he got rid of the person, he got rid of the office.
So this concept of where offense and defense—which is blurry at best in cyber anyway—gets put together at the White House? It's not getting put together at the White House. So that's about where we stand.
QUESTION: Thanks very much, David, for a very thoughtful and somewhat frightening analysis of this threat.
You've identified approaches that can't, in effect, stop it or preempt it: forward defense, forward deployment. The question I have really relates to resilience. In other words, let's assume that we really can't stop it for all the reasons that you've indicated, and that the threats are increasing and the vulnerabilities are increasing as more and more devices, the Internet of Things, just proliferate. Then the question becomes: What is our ability to respond afterward? How resilient are we in putting things back together, as after 9/11, for instance, only this is far more complicated? What are we doing and how do we do it?
DAVID SANGER: It's a great question, Bob. Resilience is a big factor in how you do cyber, particularly in certain industries.
In the financial industry it has actually become a selling point. People go out and say to their institutional buyers: "Here are six different ways that we back up all of your data in real time and do it offline so that if one part or two parts or three parts are attacked we still understand this."
In fact, your old job—Goldman Sachs—hired a guy who's profiled in the book who used to run the Department of Homeland Security's (DHS) resilience operation and is now doing the same thing at Goldman, and you see that frequently throughout the financial industry. One of the reasons it's so hard for Cyber Command and DHS to hold onto really good talent is because the financial industry and others are coming in and offering them 10 times their previous salary if they would come do the same thing for them. That's good because resilience is something that we need, and it particularly applies in the utility industry, in communications, and in finance. Those are all areas where you want to have that resilience together.
It doesn't apply quite as much to the damage done by losing data that was classified or sensitive. So consider what happened at OPM: the Chinese decided they would build this giant database of who works on what in the United States government, and they started by stealing all of the security clearances for the military and for all senior officials. At the same time they were carrying out an attack on the Anthem insurance business, which gave them another large data set that you can run big-data analysis against, and we now know about Marriott, which also appears to have been a Chinese operation. Put those together, and you can begin to get a sense of what they're doing.
Resilience doesn't help you recover critical information about how your operations run and work. It also doesn't help you if you don't know your data has been manipulated. So if you're the Iranians and you're watching those centrifuges explode, it may not help you very much in understanding what the cause is before you switch over to a different system.
Now we have seen cases where that has happened. Bill Broad, my colleague at the Times, and I reported on a U.S. operation against North Korea's missile program, and for a good number of years a lot of North Korean missiles you'll remember were falling into the sea. Kim Jong-un eventually figured out what was going on, and he switched to a different missile technology, which was his form of resilience.
QUESTION: Peter Russell.
You've spoken a little bit about the response of the private sector, kind of case by case and institution by institution. Is there a wider role and scope for the private sector to really contribute to some resolution? I want to use one example. In the paper this week there was a report of a big international food company making a claim on a Swiss insurance company for about $100 million for the impact of cyberhacking. This was Mondelez and Zurich Insurance. The insurance company reportedly turned it down and claimed that this was an act of war and that it doesn't cover it.
DAVID SANGER: You've raised two different issues. First, is there more that industry can do? A lot of the industry groups—financial industry, utilities, others—have formed these fairly formal organizations in which they do a lot of sharing of intelligence now in real time—it used to take weeks—so that if they see code that's coming in to Chase, it suddenly becomes available to Wells Fargo and so forth. That's sort of taking on what used to be considered a semi-government kind of function, and it makes a fair bit of sense.
These are supposed to also interact with the intelligence community and with the FBI and others. The problem is that the U.S. government takes the intelligence it gets, much of which it gathers as this stuff is being developed abroad or domestically, and the first thing it does in its natural wisdom is classify it, which makes it difficult to share with all of these groups.
Our classification system is built on an old way of thinking about secrets: secrets with long lifespans that you keep on paper. The data that you get about a cyberattack is probably useful in an intelligence way for a week, two weeks, maybe a month, and you've got to spread it quickly if you're going to actually counter it. So the minute that you put a 25-year classification stamp on it, you have immediately stepped on your own ability to deal with this. And yet nobody wants to leave this data unclassified because they say, "Well, the Russians will learn what we know about their system." So there's sort of an old-think that we're still fighting in how you go about the sharing of information.
Insurance issues. Cyber insurance is the hot thing in the insurance world. Whenever I do this presentation to corporate groups and so forth, the people who are involved in the insurance, or at least the senior executives—I did this out at Microsoft not long ago with their CEO conference—will come up to me, and they will usually say something along the lines of: "Before I heard your talk, I didn't think that we understood the risk of what it is we're insuring, and now that I've heard the talk I know we don't understand the risk of what we're insuring."
They don't. First of all, if you're attacked by a state but we're not at war with the state, it doesn't sound to me necessarily like an act of war. You're back in Obama's thing, right? Was this actually just sabotage? Was it some kind of drive-by shooting in a cyber analogy, or truly an act of war?
Well, of course the insurance company lawyers are going to go to the most extreme thing that's not covered, which makes you wonder why are you buying this insurance to begin with. If every cyberattack is an act of war, then don't bother with it. We're no place close to getting our definitions together as a government, and until we are I can't imagine how the insurance side of this is even going to work.
QUESTION: Hi, Michael Kaufman.
I'd be curious about your recommendations for the policy that we should pursue, given that the two things that have been in the news late last year and early this year are, one, that servers manufactured in China were pre-planted with bugs and purportedly installed in Amazon and other cloud-based systems, and, two, the race for 5G, where the Chinese have been subsidizing the manufacture and sale of their technology into telecommunications systems around the world.
DAVID SANGER: Very good question. The essence of this problem is the Internet was developed as a global good to spread communications. It wasn't built with security in mind, just as roads weren't initially built with safety in mind; somebody just laid out a dirt road where a path used to be, and later on we tried to add on good lighting and a breakdown lane and good gentle exit ramps and so forth. It took us about a hundred years to patch that over to something that only kills 60,000 people a year.
So on the Internet that's basically what we're trying to do. We're trying to take a system that wasn't built for security and layer enough stuff on it to make it secure but layer it on without slowing down traffic that's getting faster and faster all the time.
5G gives us the opportunity to actually make a big leap where you're moving people to a new network that can be built for security, and overall I think we're probably going to be more secure once 5G steps in and becomes widespread than we are with what we've got now, no matter who's building it.
5G, for those of you who haven't been following this, is more than just another little notation you're going to see on your phone. It's a very different way of designing cellular technology, so that the speed of what you're getting on wireless is essentially the same as or better than what you're getting at home right now if you've got a really high Internet speed. You will basically move entirely to a high-speed, Wi-Fi-doesn't-make-any-difference-where-you-are system, except of course if you're off in a rural area that's not served. But if you're at 64th and Park you probably ought to do pretty well. So I think there are great opportunities there.
The Chinese are building a lot of 5G equipment. It shouldn't be a shock that the world is not just sitting back and saying, "No, no, no, we only want to buy American."
The Chinese are as convinced that our systems are full of bugs as we are convinced that Huawei's are full of bugs. As anybody who has spent time reading the Snowden documents would discover, they're right. We spend a huge amount of time talking in public about the fear that Huawei's systems put in the United States would be filled with bugs that would enable the Chinese to get the data, and while we haven't found any examples of that, I'm sure it has passed through the minds of Chinese intelligence to do that.
What we don't say is that we broke into Huawei, as we discovered from the Snowden documents, which are described in the book at some length, and downloaded the designs for their systems so that we would understand, when they sold their system to Venezuela or North Korea or wherever, how we could get inside it and do exactly what we're afraid Huawei would do to us. So when we get preachy about this, I have a little bit of a hard time with it.
We're not going to stop the Chinese from selling their equipment abroad. We may stop them from selling it to the Canadians and the Brits, which is the current argument. The Australians have already banned them.
But the fact of the matter is, these networks only work if we're all connected. So if Canada does go ahead and build a Huawei-built 5G system, you might as well put them in Tulsa, because we're running stuff all the time between Canada and the United States. This is one area where building a wall doesn't actually make a whole lot of sense.
QUESTION: Anthony Faillace.
You referenced John Bolton firing the coordinator in this area. That obviously seems suboptimal. What are the broad brushstrokes of an architecture that the government could put in place to coordinate the different efforts in the government as well as with the private sector?
DAVID SANGER: That's a very good question. There have been various efforts to do this. The Obama administration turned out a national cyber strategy, the Bush administration did, and the Trump administration has recently. Part of the difficulty is a national security structure that views what's happening inside the United States, where you've got certain protections on your data and your communications, in a way that's different from the way we deal with the rest of the world. It's the Department of Homeland Security that is primarily responsible for the defense domestically.
I don't know if that makes you feel better or worse, but as I pointed out in the paper yesterday, the price of the shutdown over the national security threat that shows up on page 14 is that they have furloughed 45 percent of the cyber workforce at the Department of Homeland Security, the agency defending against the number-one threat on the list. When I called the White House for comment on this, it will not surprise you that they didn't return the call. So we've got our priorities completely backward, even if you take the existing structure.
The way we then decided to do this is that U.S. Cyber Command protects the structure of the U.S. Defense Department and others, except for the intelligence agencies, which don't trust Cyber Command because it's part of the military. So they've designed their own, largely through cloud services provided by companies, because the U.S. government doesn't build cloud services. So the CIA's cloud service and security is done by Amazon, Amazon Web Services. They bought it on Prime. They got a really great deal. That's that end.
Then, the question is, if you're a defense contractor, are you working through the DHS system or through the military system? The military says if you're a defense contractor, you're in our system, except that a lot of the technologies that we're most concerned about aren't coming from traditional defense contractors. They're actually being developed by start-ups in Silicon Valley, who are obviously at the cutting edge of a lot of this stuff. You don't go to Boeing or Raytheon necessarily for the most cutting-edge new concepts in cybersecurity, and those firms aren't covered that way.
It strikes me that this has all got to get rationalized in a much larger context, and that argument isn't happening, in part I think because so many people in Congress find the topic to be confusing, technological, and just too hard. There are four or five members of Congress whom I talk to a lot, Senator Warner among them, but there are many others who have studied this and spent a lot of time on it. Mostly they're on the Intelligence Committee, some are on the Armed Services Committee, and a few are on the Financial Services Committees or the committees overseeing electric grid issues.
But by and large, the level of ignorance is not only high, but people kind of boast about it. Think about the number of politicians you hear say—I've heard this so many times, and I cringe every time I hear it: "Well, the cyber stuff, it's so confusing. When I need good advice on it, I go to my 14-year-old daughter, and she explains it to me."
I just want to grab these people and shake them and say, "If you were sitting in Congress in 1959, would it have been acceptable to say, 'Oh, this nuclear weapons stuff, it's got nuclear physics in it, it's so hard that I go to my son, who has been taking nuclear physics in high school, and ask him to explain it to me'?" I don't think that would have been politically acceptable then, and I don't think it's much of an excuse now that it still seems politically palatable to pretend that you're ignorant here, or worse yet, actually to be ignorant here.
JOANNE MYERS: As you can see, we have many questions and many concerns, so I thank you for writing The Perfect Weapon. And it is available for you to purchase. Thank you.
DAVID SANGER: Thank you so much.