Carnegie New Leaders Podcast: Cybersecurity, Norms, & Regulations, with Jason Healey
February 27, 2020
ALICIA FAWCETT: Welcome to the Carnegie New Leaders podcast, a series where members of the CNL program identify leaders in their field and ask these experts critical questions about ethics and leadership. Our aim is to help listeners understand why ethics matter in everything from academic research to technology development and international affairs.
My name is Alicia Fawcett. I am a member of the Carnegie New Leaders Program and a part of the U.S. Cybersecurity National Moonshot Committee in Washington, DC. My work in cyber-risk is focused on analyzing emerging cybersecurity policies and infrastructure in order to protect organizations from cyber-risks, financial loss, and data loss. All views are my own and do not represent any external affiliations.
Let's take it over to Professor Jason Healey. First of all, I am overly excited, starstruck, and especially honored to be here today with the king of cybersecurity policy, Professor Jason Healey. Jason is a senior research scholar at Columbia University's School of International and Public Affairs (SIPA), director of the SIPA Project on Future Cyber Risk, and founding director of the Atlantic Council's Cyber Statecraft Initiative.
That's only a small portion of his efforts in saving the world from cyberwarfare. His unique approach in marrying emerging policy issues at the intersection of technology and policy is timelier and more important than ever. I'm glad to be here with you, Professor Healey.
JASON HEALEY: Thank you.
ALICIA FAWCETT: You're welcome. Let's get started. We're just going to go through a few questions. Why not start off with cybernorms and security?
The first question I have for you is, how has cyberconflict developed over the past few decades?
JASON HEALEY: There have been two really important trends. The first is on the audacity and scale of the operations, and the other is on our deepening dependence. You don't want those two to go together. It's okay if you're having more and more vigorous operations in cyberspace if they're going to have less impact. But of course, that's not the case.
I did the first cyberconflict history book, and quite clearly the first cyberconflict that's identifiable as such was in 1986. German hackers were trying to steal classified information from the U.S. Department of Defense and selling it to the Soviet KGB. So it wasn't conflict in the sense of warfare, but it was certainly conflict and competition and contest of the attacker and defender going against each other.
Of course, in the 1980s, 1990s, and even to some degree in the early 2000s, cybercompetition and conflict was largely about espionage and relatively small-scale disruptions. Now you have far more important national security-scale incidents, starting with the attacks on Estonia in 2007. Again, you have that intensification of the conflict—more states getting involved in the game and saying: "Hey, this is for me. I can have a strong national security impact with little cost"—at the same time when we are more digitally dependent than ever.
ALICIA FAWCETT: Very interesting. How would you define a cybernorm—something that all states share?
JASON HEALEY: Norms cover a lot of different areas in what we're trying to get done. There is such a wide range. First I want to say that very recently I saw the chief of naval operations—the top Navy officer in the United States—say: "There are no norms in cyberspace. There are no international norms." And that's just wrong. I think there is a lot that we have to do—especially those of us who believe in a better world, not that he doesn't—to push back on that, because there are all sorts of norms. Some might be weak. Some might be more ignored than paid attention to. And in some places they're lacking.
Broadly for now we can say norms are the kinds of behaviors that will be stabilizing. Those are the positive norms. We have lots of negative norms of things that states have decided that they're going to ignore, that they're going to say: "You know what? This wasn't so bad. I'll choose to look the other way."
ALICIA FAWCETT: What would be an example of a positive or a negative norm?
JASON HEALEY: Certainly states have decided that when they're subject to a cyberattack, most of the time they're not going to do all that much about it. We have done that in the United States, and many people say: "This is ridiculous. Look at what the Chinese are getting away with when they steal." But the United States has been the beneficiary of that more than perhaps any other country, as we saw after the Snowden revelations of widespread cyberespionage—and most states have said that in general cyberespionage is going to be okay. The United States has said not for commercial purposes; that's not okay. But that's not really a norm; most other countries haven't agreed to it.
This gets to a second point. When people say there are no international norms, the norm against commercial espionage is definitely one of those where there is a norm. China and the United States agreed face-to-face between the presidents that there would be no cyberespionage for profit. You might steal commercial secrets, but you don't give it to your own companies to make money. That norm was subsequently agreed to by the G20. It's a pretty strong norm in the sense of having gotten agreement. Of course, we can say maybe the Chinese aren't living up to that. That's different from not having a norm; that's saying it's a weak norm or it's not a reinforced norm.
ALICIA FAWCETT: Would you consider some of these norms—especially with the Chinese—codifying or binding? Are there ways that we can further make countries take responsibility for their actions?
JASON HEALEY: Something that I think has been very important—and I think the Trump administration does get credit for this; I suspect any administration would have done it, but the Trump administration has done it right, starting with Tom Bossert and Rob Joyce when they were in the White House—is getting much better at saying, "This particular incident was this country, and to some degree here's why we think that's not okay."
It started with North Korea and the WannaCry attacks, a set of large-scale cyberattacks that looked like ransomware but were really just disruptive attacks. Russia, not long after, did some attacks called NotPetya that splashed into a massive global incident, one of the worst we have ever seen. The White House was good, I think, about calling out North Korea for WannaCry and Russia for NotPetya, and saying exactly why these were wrong. They didn't say why they were wrong under international law; that would have been an even better position.
Only recently, in February 2020, the United States, Great Britain, and a number of other countries called out some disruptive attacks by Russia against the country of Georgia that happened in October 2019. That, I think, is helping to say what it is about these attacks. It's not only a single country; it's different countries coming together to say, "Look, this is unacceptable, and here's why," because we're not really seeing much progress in the United Nations.
ALICIA FAWCETT: That goes to my next question, about efforts in the international community. You said that the United Nations has not really been making much progress since 2015. What other international bodies do you believe would advance cybernorms?
JASON HEALEY: I don't want to be unfair to the United Nations. There have been at least three different things that have been happening in the United Nations, and there has been some progress that I'm heartened by, but not if you're really looking for things that are codified in a way that you're seeing in other fields.
Going back to the 1990s, there was action in the General Assembly, especially from the Russians, saying, "We need to limit cyberweapons and information weapons." The United States and others have pushed against that, suspecting—rightly, I think—that the Russians were more interested in limiting the United States while strengthening their own borders against news they didn't want. Such efforts were probably more about controlling hostile information that would work against an authoritarian regime than about actually helping citizens or protecting critical infrastructure. And those efforts have been on and off over the years.
Then there have been smaller Group of Governmental Experts (GGE) meetings for 10 years or more now—15–30 countries—and they have had mixed progress. Sometimes they can't even submit a document on what they have agreed to. Other times they agree to good norms that people are relying on, like "Don't attack critical infrastructure in peacetime." So they have had good progress, but it hasn't really solidified into agreement between the sides. You have a Western side and you have a Russia/China side, and we haven't done great at being able to cross that gap.
Currently in the United Nations there is also the Open-Ended Working Group, "open-ended" meaning that any nation can join it. Whereas the Group of Governmental Experts was a deep conversation within a small group, the Open-Ended Working Group I think is doing a nice job of bringing many Member States into the tent, as well as civil society, so that many people can be heard and listen.
ALICIA FAWCETT: What other institutions are working together in terms of protecting financial stability?
JASON HEALEY: I'm really glad you brought that up because a lot of times when we talk about norms, when you have admirals, senior policymakers, or international relations experts saying there are no norms in cyberspace, they tend to be thinking about a particular kind of norm. They're thinking, Is there a Geneva Convention for cyberspace?
There are a number of problems with that. One, we may not need a Geneva Convention for cyberspace because we already have the Geneva Conventions. We don't have a Geneva Convention for air warfare either; rather, we say, "No, we have the Geneva Conventions, and we can argue about how they might apply to aviation and to bombardment from the air." That's a pretty good spot to start the conversation. You can't bomb a hospital from the air just like you can't attack it with artillery. You don't need a new treaty to tell you that.
So I was a little disappointed that we haven't called that out, because when a country like China says, "Nothing in international law applies; you have to start from scratch," clearly if they are saying that the United States could attack a hospital in China and that wouldn't violate international law or that Russia could do that to a hospital in Ukraine and just shrug and say, "Well, it's all new," no one is going to find that acceptable. So we do have all sorts of great norms.
Another one I will point out—and then I want to get to the private sector, because that's what you asked—to me the most important norm, and the one that we really have to reinforce, is that states have decided not to cross the line into death and destruction, certainly not outside of a war zone. In the conflict between Russia and Ukraine, the lights have been shut off a couple of times, but to me that restraint is the most important norm. We might call it deterrence, we might call it restraint, but either way we need to recognize it as a fact, and we need to embrace it. We need to blow on those embers to say: "Good. Here's something we agree on. Let's make sure that we can keep this stabilized."
If we say there are no norms, then there's no restraint on what the United States might want to do, or Germany, or China, or Russia. We can say, "Eh, there are no rules to the game." So if we show restraint with U.S. Cyber Command, if I personally argue for cyber-restraint, then I must be naïve or a sucker, because no one else is showing any restraint either. That's why I think it's a very dangerous argument.
You asked about the private sector. I specifically wanted to call out—we can get to the financial sector in a second—that one of the earliest norms is about the private sector. When there is a significant incident, it's an exceptionally strong norm, going back to the 1980s, that techies help out other techies. You see this in the earliest incidents. In 1988 a worm called the Morris worm was taking down the Internet. How did it get fixed? Techies came together and said: "We need to keep the network up. We need to come together." That's an important norm.
In that 1986 cyberconflict I mentioned, where German hackers were hacking into the United States, the person at the center of it—who was actually an astronomer—would call the Internet service providers and say: "Can you help me out? Someone is hacking me through you."
"Oh, my gosh. That's terrible. What can I do? All right, I'll give you a hand."
That self-help norm among technologists is such a wonderful norm, and we need to be keeping these things in mind when we talk about the topic.
ALICIA FAWCETT: Interesting. Stemming off that, the financial system.
JASON HEALEY: Thanks for reminding me.
I had been the vice chairman of a group called the Financial Services Information Sharing and Analysis Center (FS-ISAC). It was created in 1998 so that if one bank got hacked, it could share information about that with the other banks. That sharing has been valuable, but to me it has added almost more value during big incidents, because the norm is that when you are hit, the banks come together. When Iran was conducting denial-of-service attacks against the banks in 2012–2013, the banks would come together. If you were a bank that had not been hit before—maybe you're not a big New York bank with a big cybersecurity team—you would probably have a bad first day, but by the end of that first day the FS-ISAC and the other banks would come in and say: "Sorry. We see what's happening to you. Here's what you should do about it based on our lessons." And you wouldn't have a bad second day.
That's a really important norm of self-help that we have happening here. Now the FS-ISAC is working in Europe and Asia, so it's not just American—it is an international norm. There is great work happening at the Financial Stability Board, which is an arm of the G20. They have set up new groups like the Financial Services Roundtable and FISIC [phonetic], all these governance and cooperation councils happening within finance, because those global norms of cooperation really are strong.
ALICIA FAWCETT: Going off the private sector as well, there are a lot of private-sector companies—to name one, Microsoft—that are doing their own advocacy for cybernorms, especially while working in countries like China, for example, where they risk losing data and the like. Microsoft recently pushed for a Digital Geneva Convention. What role do individual companies play in crafting cyberspace norms?
JASON HEALEY: It's a great point. It throws off a lot of my government colleagues. I love it. But basically you have Microsoft and other companies that are doing diplomacy, saying: "Here are the international norms. Here are the rules that companies should abide by, and here are the rules that governments ought to abide by, and let's continue to talk about this."
Microsoft has led the charge in saying that companies, in whatever country, should not be involved in attacks against civilians. If the government comes with a warrant because it is going after cybercriminals, of course you comply. But if they're saying, "Provide this backdoor" or "Help us develop this capability to attack our rivals," information technology companies shouldn't be in that business. If the U.S. government wants to take Microsoft's products, find zero-days, and turn them into weapons, Microsoft is going to be pretty upset about that, but certainly the government can't turn to Microsoft and say, "Help us do that."
Brad Smith at Microsoft and others have really been pushing this. There is now a CyberPeace Institute in Geneva. I like this very much. Certainly there is going to be overreach. There is going to be confusion. There is going to be overstepping or understepping, but governments have had decades to work on this. A lot of times I hear governments say, "Well, companies are fundamentally conflicted because at the end of the day they just want profit. They just want money for their shareholders. Therefore, we governments, our values are more pure because we're in this for our citizens." I especially hear that from the military. I'm a veteran, and I hear this a lot within the military: "Well, we do this for our country, whereas they're just doing it for profit."
But governments are doing this for their own national interests, and those national interests aren't always putting cybersecurity at the top. If there were a switch on the president's or a prime minister's desk in whatever country, and the president could flip that switch and make the Internet absolutely secure—fully private, with no one able to conduct attacks on anyone else—I know what Brad Smith and Microsoft would want. I know what Google would want. I know what the companies in Silicon Valley would want and what their preference would be. I don't know what the National Security Council would recommend to the president, because if he flips that switch, yes, the United States would be secure, but it would also mean the end of a significant amount of U.S. espionage and of our ability to hold adversaries at risk with cyberweapons. So the U.S. government is just as conflicted, if not more so, than the private sector.
ALICIA FAWCETT: To go off of diplomacy, traditionally physical conflicts have been resolved with diplomacy. Is there space in the future to use cyberdiplomacy as a way to solve conflicts between nation-states?
JASON HEALEY: It's a great question. One of the major issues right now in international relations, as it looks to cyber, is whether the use of cybercapabilities is escalatory or de-escalatory. Is this going to help peace overall or not?
The argument for peace—that cyber is actually going to help—starts with the facts. We are decades into this, and yet states have not crossed the line into responding to a cyberattack with kinetic force. In fact, you see the opposite happening. When President Trump felt the United States had to respond against Iran for planting mines on tankers in the Gulf and shooting down a U.S. spy drone, he canceled the airstrike because it might kill too many Iranians, but he went ahead with a cyberstrike.
This has led a lot of theorists to point out that cyber tends to work as a pressure-release valve, that it allows national leaders to act out but in a way that is relatively reversible and doesn't cause casualties. That's Brandon Valeriano, Josh Rovner, and other academics, and I think they have a great, great case on that. Josh Rovner in particular says: "Look, this tends to be seen as an intelligence contest, and intelligence contests are less escalatory than military contests, although they can escalate, and so as long as states are seeing it like that, this actually helps peace because it's giving this pressure-release valve that doesn't actually kill people."
The opposite argument is that pressure is building within cyberspace. As things get worse and worse, as more countries gain capability, there is more dry tinder, and things might get out of control. Cyber could cause a geopolitical crisis because one side miscalculates or makes a mistake.
It also might get bad. If you think there's going to be a war on Saturday, cyber is seen as a great first-strike weapon. In fact, you've got to use it early, before the other side can patch themselves, so you want to get in and get your disruption done. If the war is going to be Saturday, you need to use your cyber—and also space; you will want to attack his space assets—on Friday: take down his weapons, disrupt his logistics, and blind his intelligence if you can. But if I know you're going to do that to me on Friday, then I want to do it to you on Thursday. So you can see you end up with "use it or lose it," a question of who is going to draw first, and you might end up causing the geopolitical crisis, or even the war, because each side feels that now that we're in an acute crisis, we have to use this and we have to use it early.
ALICIA FAWCETT: It sounds like a tit-for-tat situation.
JASON HEALEY: The tit-for-tat is more the first one, where we keep trying—
ALICIA FAWCETT: Back and forth.
JASON HEALEY: One of my colleagues, Stewart Baker, has said, "We need to be proportional plus a little bit." Stewart is a little bloodthirsty. Certainly if each side is doing that, then you're going to end up in a spiral. I work with Bob Jervis, who talks a lot about systems theory. That's positive feedback: each action pushes us farther away from the previous norm or the previous mean. What you want is negative feedback, like a thermostat, where each new input brings us back toward stability.
ALICIA FAWCETT: To move into the financial space, what are some of the differences between cyber and financial shocks?
Also, putting two questions together, are there any benefits of developing post-quantum encryption right now?
JASON HEALEY: We have been working at Columbia University on cyber-risks to financial stability. A lot of the work that has been done on this—as you know from your work in cyber-risk—tends to look at an individual enterprise: What happens to cybersecurity at a specific bank? My colleague Patricia Mosser at Columbia runs our central banking initiative. She's working with the G20, the Financial Stability Board, the New York Fed, and the Bank of England—people who may not understand cybersecurity, but boy, do they focus on financial stability, on making sure that from whatever direction the shock comes, the financial system is going to be whole and running.
We have really been trying to come together to ask: What is different about a cyber shock compared with a normal financial shock? Number one is that you have a sentient adversary who can strike at a time and place of their choosing. Normally for a financial shock you don't have that; you just have people making their own individual decisions to sell and get out of markets or to take their money out of the bank, which then leads to a run on the bank. It's the collective decisions of individuals rather than a single entity deciding to strike when and where you're most vulnerable.
That leads us into quantum. There are both quantum computing and quantum cryptography—we will put cryptography aside for right now, though there is some promise there and a bit of a race to see which comes first. Quantum computers, for reasons of math, could defeat almost all modern public-key encryption almost trivially. It is certainly on everyone's mind, but fortunately we have seen this coming for years now, and quite a bit of work has gone into cryptography that is going to be more resistant and resilient.
ALICIA FAWCETT: What is cybersecurity versus cyber-resilience? What are the benefits of resilience, and is it attainable?
JASON HEALEY: It's a good question, thank you.
Resilience looks more at the system. When we think of cybersecurity, that could mean almost anything to anyone. I say I'm a cybersecurity expert, meaning I'm looking at cyber-risk and cyberwarfare, and of course people ask, "Well, how do I protect my password? What can I do better?" because the term covers such a vast scale. That's why I like talking about resilience: you are talking about the stability of a system and what can help that system survive shocks and hopefully be even stronger afterward.
To really tie this back to norms: since literally the 1970s the attacker has had the advantage when it comes to attacking on the Internet. I have quotes back to 1979 saying that the attacker has the advantage. And if the attacker has the advantage, that doesn't help system stability. If the attacker has the advantage, then there are all sorts of incentives to go first rather than defend yourself and always be on your back foot.
At some point, if the attacker has the advantage decade after decade, you can imagine that there might be some tipping point where the attacker doesn't just have the advantage; they have supremacy. There end up being more predators than prey. That's why a lot of my strategies focus on getting defense better than offense. Let's imagine that's possible as a goal and build toward it, because if we are trying to make that the norm, if we're trying to make that the core of our strategy rather than saying, "The best defense is a good offense"—which is where the United States is right now—then we can say, "No, let's really work on that defense and that system resilience."
If every system or most of the systems that are part of the larger system of cyberspace have better defenses and are more defense-advantaged, then that induces a lot of stability and resilience. It becomes much harder to knock it over or knock over different parts so that it will succumb. That's why I think that is one of the most important things that we can do as a strategy and as a norm.
ALICIA FAWCETT: Great. Thank you so much. I really appreciate your input. I look forward to your new projects and ideas.
JASON HEALEY: Thank you to you and the Council.