Speech Police: The Global Struggle to Govern the Internet, with David Kaye

Jun 13, 2019

The original idea of the Internet was for it to be a "free speech nirvana," but in 2019, the reality is quite different. Authoritarians spread disinformation and extremists incite hatred, often on the huge, U.S.-based platforms, YouTube, Facebook, and Twitter. David Kaye, UN special rapporteur on freedom of opinion & expression, details the different approaches to these issues in Europe and the United States and looks for solutions in this informed and important talk.

JOANNE MYERS: Good morning. I'm Joanne Myers, and on behalf of the Carnegie Council I'd like to thank you for beginning your day with us.

Our speaker is David Kaye, one of the world's leading voices on human rights in the digital age. He will be discussing his recent book, entitled Speech Police: The Global Struggle to Govern the Internet.

It's no secret that we are at a critical juncture in one of the most consequential free speech debates in human history. It is a debate that impacts politics, economics, national security, and civil liberties as governments, platforms, and activists act to control online discourse. The speed with which the Internet has evolved has changed not only how billions of people interact but how they see the world. One could argue that this medium, more than any other communication development of the past century, has transformed the manner in which information is exchanged and business is conducted.

But it has not always been for the public good. While the Internet was designed to be a kind of free speech paradise, over the past decade giant companies, authoritarian states, and criminals have colonized vast tracts of the digital space. This has given rise to fake news and disinformation. These nefarious activities have corrupted our elections and our governments, often inciting violence and promoting hate.

But because the Internet is a medium that involves so many different actors and moving parts and is not controlled by any one centralized system, organization, or governing body, regulating it has given rise to all manner of free speech issues and cybersecurity concerns. While it may make personal life just a little more pleasurable, it makes democracy a lot more challenging, raising many legal, moral, and ethical concerns. The major challenge is how to curtail hate speech on the Internet while maintaining freedom of speech.

Should governments set the rules, or should companies such as Facebook, Google, YouTube, and Twitter be permitted to moderate their space as they see fit? Who should decide whether content should be removed from platforms or which users should be asked to leave? As the UN special rapporteur on the promotion and protection of the right to freedom of opinion and expression, David has dealt with these issues on a daily basis, putting him at the forefront of this global discussion, working toward a satisfactory end.

To explain the myriad challenges in reaching an agreement over Internet governance, please join me in giving a warm welcome to our speaker today, David Kaye.

DAVID KAYE: Great work, Joanne. Thank you so much for your kind words. I feel like you've encapsulated much of the purpose of the book, so thank you very much for doing that, and thanks to all of you for coming out early on—I'm never sure what day it is this week, but I think it's a Thursday. Maybe you all are in the same boat on that. Thank you so much for coming out this morning.

I'm going to talk today a little bit, and I do want to emphasize a "little bit," because I think this is a topic which is in many respects remarkable in the way it has captured public attention. When we talk about who controls speech online, I think of it as, "What's the trauma of the morning?" It's almost like watching the Trump administration as well. It's a question of what has arisen overnight that has caused some trauma that morning, whether it's on Facebook or YouTube or Twitter or any number of other platforms around the world.

The way I want to structure my discussion of this is to talk about basically three things. First, I want to say a few words about how the companies have come to dominate public space in so many places around the world. What is the nature of platform power?

I'll start there, and then I'll talk a little bit about how governments around the world—although interestingly, not the U.S. government; this has been basically in the context of regulatory discussions—first in Europe and then, beyond Europe, in authoritarian regimes, are often looking jealously at the power of the companies and trying to figure out how to pull them back, how to reclaim public space for their publics.

Then, in the third part of the discussion I want to give some ideas about how to solve the problems, but I use the word "solve" with some caution, I should say, because—although I don't want to start with any negativism or any kind of cynicism or deep skepticism—I do think that this is going to be a generational problem for us to resolve, whether we're talking about disinformation or hate speech or even terrorist content. We have some extremely serious challenges ahead of us. They are challenges about public policy, they're challenges about the nature of expression in a digital age, and they're also challenges about the nature of regulation and what we think the mutual roles of government and the corporate sector should be in the world in which we live today.

That's the overview, and I hope that gives you a little bit of a roadmap of where I'll be heading this morning.

Let's start with the platforms. In the book I start with several examples of platform power. The platforms, if you look at some of the stories of the most recent years, these are stories that have been headline news for many of us. They've been column-one stories in The New York Times, The Washington Post, The Wall Street Journal, and in newspapers around the world. This is really a globally focused problem.

Think about some of them that have been most evident to us. Think about Myanmar. In Myanmar, as an example, Facebook is the Internet, but it's not just the Internet. Facebook is also the public square in a way that is often hard for Americans to completely conceptualize, and I put myself in that camp also.

In the United States we might think of Facebook as one platform among many. The idea of #DeleteFacebook—it's a cross-platform metaphor—sounds like something that we as Americans can do, or as Europeans or people in other developed media spaces, let's say. It's not all that hard to leave Facebook or to leave Twitter. You can go somewhere else. You could go maybe to Google or to YouTube. You can also get your information from the independent media in the United States or in Europe or elsewhere. But if you're in a place like Myanmar or the Philippines, or many, many other jurisdictions, let's say, Facebook is not just where you are communicating with your high school friends. This is the place where people are getting independent information. It might not always be accurate information, but this is the place where they're getting information.

The dominance of the platforms is not just about dominating social media, it's about dominating and having a major impact on public space. So the platforms have this enormous power. They have it in Myanmar, and they have it in many other places.

We should also step back for a moment and realize that the development of this power has been explosive. Facebook is 15 years old. Twitter and YouTube were born basically in 2005 and 2006. These are very new platforms. This has happened suddenly, and early in their existence they had relatively modest rules. If you look back at the rules for what you could post online from the earliest days, they are exactly what you would imagine some tech bros in Silicon Valley would create, or, according to Mark Zuckerberg's origin story, what some smart, young guys in a dorm room at Harvard might consider to be appropriate rules.

But over the last several years, and in particular over the last I would say five to seven years, the companies have developed not only this power actually out in the real world, but they've developed into significant bureaucracies. They are governing institutions.

I know that when we look at anything from even the last couple of weeks—think about the doctored video of Nancy Pelosi—it appears from the outside that the companies are doing this by the seat of their pants: "Oh, there's this doctored video. Oh, there's this massive clamor for it to be taken down." So they respond or they don't respond, and there was some variation in how they responded.

But the truth of the matter is the companies have very detailed rules, and they have extensive bureaucracies for deciding what the rules are. At Facebook they have something which I observed last year that they call their "mini-legislative forum," where they actually gather together like a legislature. You can almost close your eyes and imagine what a Silicon Valley legislature looks like. They're a lot younger than us mainly.

They are making rules, they're making decisions based upon highly bureaucratic structures, and then they're implementing those rules. They have rule-making teams. Facebook is the most bureaucratic of the companies, but they all do this at some level. They have thousands upon thousands of what they call "content moderators," people around the world who are actually deciding based on these rules what comes down, what goes up, what should be deleted, and what should be ignored.

They have escalation procedures and procedures for appeal. Many of you might have done this before: when you're online, you see that there's a little place where you can flag content and report it to the company. People do that all the time. Sometimes content is flagged to the companies through algorithmic decision making.

Think about the amount of content that is actually uploaded to these platforms. It's absolutely unfathomable. YouTube, for example, is up to 500 hours of video uploaded per minute, which is just crazy. For the companies to actually moderate that content required rules. It required a form of bureaucracy. I think it's important for us to see the companies as actual governors of expression around the world, and that has greater or lesser impact in different places.

Our discussion in the United States is different from the discussion in Myanmar, different from the discussion in Germany, France, or the United Kingdom, different from the discussion in New Zealand, where there are different kinds of traumas that we see from different kinds of technologies—but all rotating around these platforms, and the platforms have this massive power to handle this work.

That's the first part. I think we need to really conceive of the companies as real adjudicators of expression in the digital age, in a way that the familiar debate—whether the companies, the platforms in particular, are publishers and editors or mere conduits, just hosts of information—doesn't really capture. That framing doesn't quite capture what is happening in social media space. It's the impact and the power of the companies that I think we need to be focusing on, and asking what we should do about that. That's the first part: laying out what the companies are doing.

Go back a few years. Imagine that you're in Germany, and it's 2015. What's happening in 2015? It's not really about social media. It's not really about media at all. It's about Chancellor Angela Merkel making this really remarkable decision—which to my mind is the most striking example of how Germany is different from what many of us might have thought of Germany growing up, really since maybe Willy Brandt dropped to his knees in Warsaw—that Germany would welcome in 1 million refugees, migrants, and others coming basically from the war in Syria, the war in Libya, and other places of trauma around the world. It was a major statement, but it was also more than a statement. It wasn't just symbolic; it was an actual policy that was changing Germany in many respects, and it reflected a changed Germany. In general, certainly from outside of Germany, there was deep respect for that.

But what was happening in Germany at the time? Think back to the summer of 2015. You might remember some of the images from that summer. You might remember the image of a Hungarian camerawoman kicking migrants who were coming across the border into Hungary; you might remember the image of Alan Kurdi, the three-year-old boy who died trying to cross the Mediterranean to reach Europe, lying face down on the beach. These were the images of that summer.

At the same time, many on the far right, the neo-Nazi right, and this strongly anti-immigrant right in Germany were starting to use Facebook and to a certain extent YouTube and Twitter in order to incite hatred against the immigrants, the migrants, the refugees who were coming into Germany.

At the time, the minister of justice—who is now the foreign minister of Germany, Heiko Maas—saw that, and he started to get concerned, in particular because many of his constituents, many Germans, were writing to him and saying, "Why is it that Facebook will take down a female nipple, but they won't take down incitement to hatred? What is that about? That isn't German values. That isn't German law."

German law has—I wouldn't say the strictest, but—very significant, longstanding rules around the dissemination of hate speech, the dissemination of Nazi imagery, and around criminal insults as well. Heiko Maas saw this. He was under enormous pressure, but he was also himself, I think, extremely concerned about the use of Facebook in particular to incite violence and incite hatred.

He wrote to Facebook and said, "What's going on? You are not doing what you need to do. You're not even addressing your own hate speech rules in order to deal with this major problem that is affecting our country. This is a public policy problem for us. This isn't just a problem of what kind of content is on your platform, this is about what are you doing as a platform in order to deal with the kind of content that is inciting violence in our streets"—there were firebombings of refugee centers, there were attacks on specific refugees and non-refugees. Germany is a diverse country. It's not as if someone on the far right can distinguish who's a refugee and who is a citizen of the country.

In that environment, Heiko Maas said to the companies—and this was basically to Twitter, YouTube—owned by Google—Facebook, and Microsoft at the time, invited them in and said, "Look, we need to fix this problem." By "fix this problem" he meant, "You need to do what I tell you to do," and so they created basically a nonbinding code of conduct that the companies were meant to follow. It was around hate speech.

In this nonbinding code of conduct the companies agreed essentially to take down content that constituted hate speech under German law. That was their commitment. It was a voluntary commitment, a set of voluntary guidelines, and over time Heiko Maas and his team at the Ministry of Justice in Berlin thought Facebook in particular had betrayed them, that they weren't doing what they committed themselves to do on a voluntary basis.

Rather quickly, by early 2017—this nonbinding set of guidelines had been in effect for about a year—Heiko Maas didn't see much movement or incremental movement at best, and he told his team at the Ministry of Justice, "You know, this soft law isn't doing the work. We need hard law. We need the hard edge of enforcement under German law."

What he did was move toward what we now know as the Network Enforcement Act—in German it would probably take me seven minutes to pronounce it. We know it as NetzDG (Netzwerkdurchsetzungsgesetz). It's the Network Enforcement Act in Germany. NetzDG basically says to the companies, "If you're of a certain size, if you make a certain amount of income annually, you'll be subject to these rules, and these rules mean: One, you need to be transparent about what you're doing. You need to report to us, the government, every year"—actually, I think it's twice a year—"about the nature of your takedowns of content. What are you doing to evaluate content under German law?"

"Second, if you do not implement"—and this was the hardest edge of it—"a new set of rules around hate speech and other rules of German law systematically, you'll be subjected to fines that could go up to €50 million per example." So they went from this nonbinding set of norms to this real incentivizing of taking down content.

This German model—the transparency part is good, and I'll talk a little bit about transparency when I talk about some solutions—this hard-edged, sanction-oriented model is a model that has been seen as very attractive around the world. That part of the NetzDG model is basically spreading. It's traveling around the world.

It has been somewhat resisted in Brussels, so the European Commission has mainly focused on a nonbinding approach similar to the first part of Heiko Maas's effort, but that might be changing. There's a tremendous amount of pressure on the European Union to do something about hate speech—although after the European parliamentary elections last week it's unclear exactly what direction that will take—and there is enormous pressure from the leadership in Paris, in London, and in Berlin to do something stronger at the European level. So that's happening.

One of the troubling parts of this move toward regulation in Europe—although I want to caveat this by saying the debate in Europe is rich, it's important, it's completely legitimate. It's a debate that we're not having in the United States at all. We've started to talk about competition policy, but there's really no discussion in Washington around content issues.

But the policy around Europe has had the following effect: On the one hand, from the outside it looks like Europe is basically saying, "We're going to rein in the companies," and at some level that is the intention. They're saying to the companies, "We want you to abide by our rules, by our laws. You're American companies." As one European Commission official put it to me, "You are profit-making beasts. You don't have our interests in mind." As a European parliamentarian said on the floor of the European Parliament, "We don't want Mark Zuckerberg to be the designer of our reality." There's a very real feeling of alienation across Europe that these American companies are doing things that have a deep impact on the European information environment.

The paradoxical thing about this policy is that it's increasing the power of the companies, because what it has done is first of all it has increased the costs of implementation, so only the wealthiest companies, the Facebooks, the YouTubes, the Twitters—well, Twitter isn't that wealthy, but the other two are—can actually afford to implement these rules.

They're in a sense locking in their power because it's very hard for some start-up to develop in that space. It's not to say it's not possible. MySpace is a good example, a kind of cautionary tale for any of these companies, but these are massive companies, and they have essentially cornered the market in many of these places. They are dominant in Europe.

But the other part of it is that the European rules say to the companies, "This is the law. We want you to implement it, and we want you to adjudicate it. We want you to make the decisions about what is and is not consistent with our law," and there's essentially no public oversight of that. There are no courts. There is no independent public agency that is intervening in order to say, "No, Facebook, you got it wrong. That is not German law." That isn't happening. That might be starting, but the jurisprudence doesn't really allow that because under European law, similar to American law, basically the companies are immune from liability for their decisions about takedown questions, about whether they leave content up or take it down.

So the power of the companies has paradoxically increased, and Europe, even though they're having this rich debate, has hardly even scratched the surface of a creative debate, a sort of creativity around what it means to deal with content in the digital age. What does it mean to have democratic control of public space in this era where the companies have so much power? Is there a way to basically interject our public institutions into this private sphere? That has not happened yet in Europe.

That's in a nutshell the challenge that we face around the world, particularly—and I've obviously described it mainly as a challenge in the democratic space, in the democratic world.

I just want to say very briefly a word about the way in which the debate in Europe, and to a certain extent the debate in the United States, has had a real negative impact on others around the world who might not see themselves as being as committed to democratic principles and to principles of freedom of expression as Europe is.

Basically, think about the debate in Europe. I think Joanne used the phrase: the original idea of the Internet was a kind of free speech nirvana of some sort, this paradise for democratic speech, certainly for cheap speech. Originally the idea was, "The Internet is great for speech. It's openness. It's going to democratize everything." But if you think about what the rhetoric has been over the last several years, it's mostly about the dark sides of the Internet. When is the last time you saw an op-ed in The New York Times about the wonders of seeking information using Google search? You haven't seen that recently.

What you have seen instead is a basic rhetoric of the dangers of speech online. That's essentially where we are today. Most of the discussion is around how do we govern the Internet, and in fact that's the subtitle of the book. It's about the governance of the Internet. It's not about expanding individual freedom, it's about how to govern this space.

That rhetoric has actually been adopted around the world. It's not to say that Russia wouldn't still adopt its rules against expression, or that Singapore wouldn't adopt, as it did last month, a new law on what it calls "online falsehoods," which criminalizes speech—as Singapore already had—and also puts enormous pressure on the companies to basically correct information online at the direction of any particular member of the government.

That rhetoric has again not created that move, but it has given cover to authoritarians around the world to also see the dangers of the Internet, to also criminalize the dissemination of false information. I haven't given—and I wouldn't give—a guidebook as to how to deal in particular with hate speech or disinformation in a jurisprudential sense, but all of these countries on the periphery of Europe but also in core areas around the world are benefiting from this rhetoric and pushing for more and more restrictions on online speech.

At a certain level I don't really care about the companies. What I do care about, and what I think we all should care about, is the ability of the companies to continue to be places for robust debate, for individual expression, for criticism of government, and that is what is being lost bit by bit over time in this environment of a rhetoric of danger.

I'll close with just a couple of points about solutions. As I said at the beginning, I'm not really the solution guy. I think there are important things that the companies and governments can and should be doing, but as one European commissioner put it, "I don't think we can eradicate hate speech. I don't think we can eradicate disinformation." We should be looking at solutions or tools that would mitigate the risks of these things, that would mitigate the impact of hate speech, but it's impossible to eliminate it entirely. The nature of speech, and particularly of expression at a time when there are so many calls for it to be restricted, is that it becomes a kind of cat-and-mouse game.

We see this in China as an example. People in a deep environment of repression use all sorts of code to get around the censors—whether it's the Winnie the Pooh meme, if you've seen that with respect to President Xi, or any of the other creative tools that the Chinese will use to get around censorship—and it will be a cat-and-mouse game forever. So we won't eradicate it.

But I do think there are at least two or three steps, and then I'll finish up. One is transparency. Transparency as a mantra sounds meaningless, but what I mean by transparency is that the companies need to disclose much more about not only their rule-making process but also the nature of their enforcement.

We need something essentially like a case law of their work, a case law of their enforcement, because right now—and this is the problem of consistency—we don't really know from day to day what the impact of a particular rule will be, or whether particular content will be allowed to stay up or be taken down. You can see it every morning; there's a story about YouTube's new rules that were announced yesterday. We are in a basically asymmetrical environment, and so it's very difficult for us to have a robust debate when we don't have full information about how the companies are actually implementing their rules. That's one step—again, not a total solution, but one part of the puzzle.

We also need more transparency from governments. Governments increasingly are also making demands on the companies to take down content. That we all know.

But one thing that is relatively hidden from public view is how law enforcement around the world operates: they don't go through their courts; they pick up the phone and call the local Facebook or YouTube representative. It's funny-not-funny. They might send an email to the same address that we would send an email to back at Menlo Park or San Bruno or San Francisco and say, "Please, I'm writing you from Scotland Yard, dear sir; take this content down."

That's happening. Those demands are happening. Companies have started to be more transparent about what they're receiving in the aggregate, but governments are not transparent about that at all. So we need to demand that governments be more transparent about their demands, and also that those demands not go through these extralegal channels but through their courts. If you're going to be using the tools of government to take down expression, it should be according to legitimate public process.

Another tool I think would be really valuable is for the companies essentially to junk their terms of service around content and rearticulate them as rooted in human rights law. This is where I'll say just a concluding word. Basically, the companies' rules now are all oriented around their terms of service, which are essentially contracts, and it's really about the discretion of the companies to determine what can be on the platform and what cannot.

That puts them in a relatively difficult position when it comes to governments that say, "Well, this isn't about our law, but we don't like this content for whatever reason. It would interfere with national security or public order. We want you to take it down because, by the way, it's inconsistent with your terms of service."

What does a company do in that situation? They could evaluate their terms of service and say yes or no. It's up to them, it's their discretion. Normally, it's going to be very hard for them to say no to a government, and that's legitimate, and that's also something we should not be surprised at. Governments want to have control over their own jurisdictions. Government is jealous of its power. That's the way the world works.

But when the companies go back to these countries, to these governments, it's not a real strong argument to say, "You know, our terms of service say this. This is about our business model. We can't really take this down. It wouldn't work with our business."

But if they were able to go back to countries and say, "Look, part of our role here is to protect your citizens, to protect our users when their freedom of expression is interfered with," that at least raises the stakes for the country.

I am not naïve about this. It's not as if a country like Turkey is going to say, "Oh, wow. Human rights standards. Sorry, we didn't mean to make this demand of you." But it will create a kind of layer, and we've learned—I think this is something we do know—that any bit of friction between the demand for a takedown and the company's response can actually slow down the process of censorship.

It might not always work, but giving them that tool I think would be really important, and it would give us a tool to make the demand of companies and governments that many of us—and many of you, perhaps, in the Internet freedom community—have been making over the last several years, which is that we want companies and governments to observe the human rights of their users.

Ten years ago this would not have been an issue. The companies were not that powerful. We lived in the age of the blogosphere, where you'd go from blog to blog, from link to link to link until suddenly—how did you get there? you don't know—but it was a curiosity, and you enriched yourself, and you had access to information and all of that. Everything is centralized now. It's centralized on these few platforms. They've become ripe for censorship, whether it's from governments or from companies that are moderating huge amounts of public space. It's an environment that we need to get a hold of very quickly. We need to be debating this. We need to have this at the center of our discussions around what the future of freedom of expression should mean in the digital age.

Thank you.

Questions

QUESTION: Thank you very much for your presentation. As you know, America has a very different attitude toward hate speech than Europe, and basically things that would be banned there are not banned here.

I'm more interested now in the current effort in Congress to finally do something about the economic power of these institutions. Do you think that the antitrust measures that the Trump administration is now considering are likely to succeed?

Two, can you see Congress in any way limiting or shaping the power of these companies to control content? I ask as someone who has actually been censored by Facebook—had something taken down, viewed as inappropriate, which was a five-minute discussion of whether or not we were lied into the war in Iraq. That was taken down as inappropriate. We've been unable to get it restored; we're suing—Prager is suing Facebook to get that restored.

DAVID KAYE: But . . .

QUESTIONER: But do either of these routes, through antitrust or what's going on on the Hill, have any chance of beginning to come to grips with the issues you've raised in your excellent speech?

DAVID KAYE: First, thank you. Second, no.

I should say more than that. Without taking a position on antitrust, I think that solves a set of problems. But even the people who are promoting it, the most thoughtful people who are promoting antitrust I think acknowledge that it's not a solution for the content issues.

If we think about Facebook, and we think about Chris Hughes's op-ed calling for a breakup of Facebook, the idea generally as far as I understand it is to separate Instagram and WhatsApp from Facebook, which might go a long way, maybe particularly in the United States, around limiting the power of the company. But the huge public forums that they've created on each of those platforms won't necessarily dissolve simply because they're broken up, and I think this is particularly true—and this is where I have concerns around the antitrust discussion.

You mentioned the Iraq War, so I'll mention Colin Powell, and his Pottery Barn rule, which apparently was not a Pottery Barn rule—"You break it, you own it." There should be a corollary, which is, "You built it, you own it." The companies have built these massive structures, massive spaces for public debate all over the world, and one question I have about antitrust and the debate in the United States is: Okay, you could do this for a relatively small part of these companies' market. Eighty-five percent of Facebook's market, its user base, is overseas. So what does it mean to have an antitrust approach when the real issue for so many people around the world is the power of the platform to moderate content?

Are there impacts of an antitrust approach in the United States that might incidentally—and I think unintentionally—interfere with the public space in other parts of the world? For example, if you break up the companies, what does that mean for WhatsApp, which is a really good example of both an incredibly useful and a problematic tool for communication, and which is free? Does that change the business model for WhatsApp? Does that mean that people around the world suddenly need to pay for it? It has been a real tool for people in poverty. So, how do we conceive of our approach as having a massive global impact?

I guess what I'm most concerned about is American legislators, who have not shown themselves to be serious on this issue, at least over the last couple of years—the dysfunction there gives me pause as to whether they can do this without mucking things up around the world and causing additional harm.

I did want to say one other point, though, about the hate speech question. Our norms, as a matter of First Amendment law, are different from European norms. There is quite a bit of similarity. There's more similarity than I think we often acknowledge. But one of the interesting things about the platforms and our reaction to the platforms is how much there is a clamoring in our public for the taking down of hate speech. Remember, "hate speech" is not a defined concept in law. It's a shorthand for something else.

But what exactly does it mean? I think that's a very difficult question. But, putting that aside, there may be a difference between our legal norms and our social norms, and the social norms in the United States at this particular moment may not be all that different from European norms. So as the companies move to restrict more and more of what they call hate speech or hateful conduct, I don't know if there will be much of a real difference between European and American attitudes.

QUESTION: Thank you very much, and David, thank you very much for an amazing outline of the issue. I'm Craig Hawke, I'm the New Zealand ambassador here at the United Nations. Thank you very much, Joanne, and the Carnegie Council for hosting us this morning.

I guess this is very raw and personal for New Zealanders after the attack in Christchurch. We as a country are very much committed to freedom of expression, and we've been grappling with this issue in the sense that the public generally feels outraged that the attack was streamed on Facebook Live, and there is a perception that the company did not do anything about it.

I just wonder—and there is no definition of hate speech, we know that. But there's a gradation of our ethics and morality in this, and New Zealand along with a number of other countries agreed to the Christchurch Call, and this was not about taking down hate speech; it was about online violent extremism. Beyond that, if the companies cannot grapple with this, they're going to lose in the court of public opinion, and then it's going to give power to governments to regulate.

So my question—and you touched on it—is around transparency. Are these companies doing a good job? That group of young people sitting there—you gave it lots of governmental-type names—is very opaque. What does that look like? I guess our view is that these companies need to become much more transparent around that. So my question is, are they doing a good job?

Second, they're losing in the court of public opinion more around privacy and data invasion rather than the hate speech. The hate speech I think came later. What can these companies do to improve that public profile?

Finally, just on the institutions: are we missing governmental institutions—and I know there are different views in different countries around the role of government, but you mentioned some of those tools—that would provide some transparency, some accountability at the governmental level too, so we as a public can have faith in our institutions to manage this carefully? Thank you.

DAVID KAYE: Yes, great. Ambassador, thank you for those questions.

The deep tragedy of the Christchurch massacre I think really does focus our attention on the way in which the platforms can be abused. Fundamentally, this is a crime of an individual against specific Muslim worshipers in Christchurch. That's the fundamental part.

The question then is to what extent that kind of content, from this one depraved individual, should be allowed on the platform. I think that by and large—I'm guessing, although I don't want to make any assumptions—most people would agree, and even the Facebook rules on the content itself would agree, that that is the kind of content that should not be on the platform and that every measure should be taken to resist the ability of bad actors to incite violence. That's the overall approach and the idea, and I think the public policy question.

But then you asked some really hard questions, I think. I'll try to answer them each in turn. One is, are the companies doing what they should be doing here? They're incredibly opaque. They are not doing enough, at least in terms of disclosing to the public how they conceive of these problems and how they implement them, and how they enforce their rules. I think they have failed.

That feeds into your third question, which is about the public's view of the companies. By the "public" for the moment I'm talking about democratic publics and democratic space, because there's a little bit of a difference—which I do want to say something about—with spaces outside of democratic countries like those in the Pacific or in Asia or the United States or Europe. But I do think that the lack of transparency, the opacity of the companies, and the sense that they have become forums for bad actors is a real toxic brew for the companies, and there's extreme alienation from the companies.

On the other hand, the companies are remarkably popular. The public is fickle. It's true. I have teenagers. They're not really on Facebook. But around the world Facebook is incredibly popular still, even in the places where it causes harm. I think that combination is cautionary for us.

In thinking about that, maybe focusing in on the Christchurch Call a little bit and the question of live streaming, the question that I really want people to think a little bit about and for governments and the companies and the public to grapple with is that everything involves tradeoffs, and we should just be honest about the tradeoffs.

Live streaming of some narrow quantity and narrow kinds of content—like the "abhorrent violence," as the new Australian law puts it—is a very, very tiny percentage of what's live streamed, but it's abhorrent, and it can cause real harm. This is, I think, an empirical question that deserves a lot more study, but it appears to be able to incite violence and to incite hatred and discrimination.

We could conceive of that as the core problem, and we could imagine saying, "Well, then, any kind of live streaming should be subject to a one-hour delay," like in the old days if you were trying to watch a Knicks game in New York, there might be a little bit of a delay. I'm not from New York. I'm a Lakers fan, just to be clear about it, although that's traumatic, too, these days. But you could say as a rule that would be our technical solution, and then that will give us time, although I don't know if it would give enough time, to evaluate content and decide, "Okay, this is violent. This is extremist violence, however we define it, and that should come down."

The problem—and this is the tradeoff, this is why I think it's a public policy question as much as it is a company question—is live streaming does have value in some environments, and it has value for the dissemination of live public protest, which can be really timely.

Consider what happened the other day: the government of Sudan killed—we don't know the numbers yet, but probably well over a hundred—protesters, in the context of the efforts at a transition to democratic rule in Sudan. I think many people around the world and in Sudan would benefit hugely from having that live streamed. To have that in the moment means that governments—at least democratically oriented governments or governments with some leverage—could put pressure on the Sudanese government at that moment to save lives, to stop what they're doing.

If we have a discussion around live streaming that is only about the harms, we might lose that. That's okay. I feel like that's a public discussion to have and a public decision, and we could also think about it in many other contexts. We could think about it in the context of police abuse, where we've had some very striking examples of live streaming being really valuable to our public debate.

I think the Christchurch Call is really important in the sense that it basically says this is a public question that needs to be resolved not merely by the companies—although the companies have a major role to play—but is of major concern to governments and to those who care about public policy, and it should be a public discussion. My hope is that it will trigger a broader discussion about how we make public decisions in a way that genuinely involves all the relevant stakeholders. At the moment, we don't have that. My hope is that if there's anything good that comes out of the last few months, maybe that would be one impact of the Christchurch Call.

QUESTION: My name is John Wallach. I'm a political theorist from the City University of New York.

I have a question about one thing that you did say and one thing that you haven't yet said. The thing about what you did say was about the paradoxical result of the German regulation and how that increased the power of government. I wonder if you could just say a bit more about that because I wasn't completely clear.

The other thing is about measuring the impact of these platforms. For example, if you were talking about hateful speech, Facebook doesn't hold a candle to the president of the United States. What do we talk about when we're talking about measuring the impact of these things such that it involves legal regulation? In particular, is there anything new legally that you've found in terms of understanding the relationship between hate speech and hate crimes like the ambassador was talking about?

DAVID KAYE: Those are great questions.

On the first part, what I was talking about was the paradoxical increase in the power of the companies, in the sense that they are now the adjudicators of law in these places. The paradoxical impact of the German law—and the reason why it's paradoxical in my mind—is that it's an effort to restrain the companies, and yet it gives the companies this new power over public space. That's the increase in power. There's also a more economic increase in power: it makes it harder for new entrants into the market, so it kind of locks in their market power. That's what I was thinking of in terms of the increase in power.

Your second question is really interesting, and I think really, really important, because one of the things I did not talk about is the way in which some of the content that we think of as the worst kind of content actually goes viral. An example is the Nancy Pelosi video. The statements we've had, I think from Facebook, have been that until Fox News and the president of the United States picked it up, they hadn't really seen it go viral on the platform. It was only when the president re-tweeted some version of it and Fox News did a story about it that it started to get into the millions. Once it's in the traditional media and the broadcast media, it has an entirely different valence.

I think this is a really important point in the sense that the companies are not operating in a vacuum. They are operating—particularly in the United States, because it is different in other parts of the world—in an environment where there are political actors who should know better and who are basically helping the virality, they are amplifying the voices of bad actors, they're amplifying disinformation, they're amplifying hate speech. They're amplifying all sorts of content that we might think of as harmful to our public debate or to, say, elections, or whatever it might be.

I think you're asking a really good question, and there is not enough data around this. One of the reasons there's not enough data is not that academics don't care about it. The companies, in particular Facebook, have been extremely resistant to opening up their platforms to academic research. Ask anybody who does research on social media—to a person, they will tell you that it has been extraordinarily difficult to get access to the companies to do the kind of research that I think you're suggesting would be important. It would be important to have that research, because then we would know the impact of a particular rule—the impact of, essentially, law on behavior—and we just don't have a lot of that kind of data. One of the calls that I've made separately, and a little bit in the book, is that the companies need to open up to research.

I'll finish this with one slight problem, which is privacy law. The European General Data Protection Regulation, the GDPR, has made it harder—at least as the companies perceive their ability to share information—because they feel that they are constrained now by new privacy rules about what they can share with researchers or others. They are probably right. There are new liabilities in European law that cause them problems.

The privacy focus—which we also saw in the context of the "right to be forgotten" in Europe—has real implications for freedom of expression on the platforms as well and what we can learn from the platforms.

QUESTION: Hi, Michael Call [phonetic] from Met Capital.

You presented the U.S. platforms as sort of being dominant around the world, but we've seen WeChat and TikTok take a tremendous share in some of these international markets. I'm just curious how they differ, and what you think the implications are of that.

DAVID KAYE: This is a book about the American platforms. I was most interested in the book about the regulatory debates in the democratic world. I devote a little bit to Singapore and Kenya, which are not—Freedom House calls them "partly free." So I was very much focused on those.

But it's true. There's WeChat in China. It's a microblogging platform—I don't speak or read Chinese—but Chinese experts tell me it's a much richer microblogging environment because of the capacity of characters to say more. It's a much richer environment than Twitter, for example.

So you have WeChat. You have VK in Russia, which really does dominate Russian-speaking social media, but they are relatively limited to their space. I want to come back to something about China in a second, but basically they operate in a way that's not that much different from the American companies.

They both—VK and WeChat—have terms of service. VK's community rules are not all that different from, say, Facebook's, the difference being that in Russia they have a device that basically vacuums up all traffic, so all content is available to the government directly; you don't have to rely on the companies to regulate. So that's a problem.

But the Chinese question I think is the most interesting in a way for a couple of reasons. One is I think it's interesting to look at their terms of service, their standards. One of their rules on the platform—you can imagine what the rule is that I'm going to suggest—is that you cannot be critical of the Communist Party of China. That's a generic rule. Obviously, WeChat has to have that rule because of Chinese law, which does not privilege freedom of expression by any measure. So there's this core interference with public debate. It's a robust platform from what I understand, but it has that kind of problem.

But here's where the global part of it I think is interesting. You're right. TikTok is becoming more of a platform that people use outside of China. With WeChat, think about the Chinese diaspora—the hundreds of thousands of students who are at American, European, Korean, Japanese, and other universities to study and who will go back to China. China has increasingly expanded its reach to demand censorship of the platform beyond Chinese borders. We saw this in the run-up to the Tiananmen Square anniversary a couple of days ago: WeChat is basically sanitizing more of its platform even when it comes to overseas posters.

Why is that? They don't want their users, their citizens who are overseas, to come back with this new knowledge of things that the government has been trying to hide from the country for many years.

Louisa Lim wrote a really great book about the kind of historical amnesia in the country—people there don't even recognize the "Tank Man" picture. They don't want this huge cohort, which is likely to be the country's elite in many respects in the coming years, to have that knowledge.

I think it's a little ridiculous, because people who come to study in the United States have access to all sorts of information. On the other hand, from what I understand, oftentimes their information environment is still relatively limited, and so that kind of censorship is happening. My hope is that that kind of censorship won't expand if WeChat becomes popular as an English-language or other-language tool.

We haven't talked about language here at all, by the way, just as a footnote. We've been talking about how hard this is with all this content now. Multiply it by the hundreds and hundreds of languages that people are using on these platforms. If we're talking about WeChat, yes, the future is a little bit disturbing in terms of the direction it could go.

QUESTION: The question will be brief, but the answer's too complicated. What has it meant to be special rapporteur as an American, and what tensions have you faced internally? Have there been special expectations of you or special criticism of you because you're an American in this role?

DAVID KAYE: That's a great question. You're right. This is another 30 minutes maybe. But if I could just say, Monroe Price [who just asked the question] is one of the leading figures in the space of media and law, and I'm grateful that you're here, so thank you for the question.

QUESTIONER [Mr. Price]: That's why I asked the question.

DAVID KAYE: That's why you asked the question, right.

I'll say just a word on this. In the role of special rapporteur I monitor free speech issues around the world, and when I was first appointed, you're right, people were a little surprised that the Human Rights Council would appoint an American, somebody just like the people moderating content in Silicon Valley. We're all marinated in the First Amendment culture, and I think there was a concern that what I might bring to the position is this sense that people around the world have of American zealotry about free speech, like free speech at all costs.

You know this. The First Amendment is not a free-for-all. The First Amendment might say, "Congress shall make no law," but there are restrictions. There are time, place, and manner restrictions. We've had a history of broadcast restrictions. That might not be really evident today, but we've had that in the past. So that's one part: I have tried to break down that kind of misunderstanding.

But the other part of it is—and I'll finish with this—that the role of special rapporteur is to focus in particular on human rights law, which provides, in Article 19 of the Universal Declaration of Human Rights and in other instruments, that everyone has the right to "seek, receive, and impart information and ideas of all kinds, regardless of frontiers, and through any media." There are restrictions that governments may impose, but that's a vocabulary that people speak all over the world.

I really do get kind of exhausted by discussions about, "Oh, human rights don't matter. Human rights is history. Human rights is dead," all of that stuff, because around the world—and this has been the eye-opening thing for me—people really use these tools. They use them in their advocacy with governments; they use the vocabulary of human rights law.

I think I got over the initial, "Oh, you're an American. You must believe that free speech is the First Amendment." I got over that by really just focusing exclusively on the jurisprudence—and there is rich jurisprudence around human rights law around the world, in the European, the Inter-American, and the African contexts—by really treating that as the foundation.

I would just close by saying I think that set of norms is missing from our own debate, our own understanding of what free expression can mean. It's not just about "Congress shall make no law"; it's also about what we as individuals have access to and what may legitimately be restricted under human rights law. I think if we take a too-zealous approach we miss a lot of what's happening to people on the ground in real space and not just in virtual space.

For that reason, I think actually human rights law can play a really important role in helping us think through the problems of online speech.

JOANNE MYERS: On that note, I really want to thank you for engaging us and encouraging us to continue this discussion. So thank you very much, David.

DAVID KAYE: Thanks to all of you.
