Computational Propaganda, with Nick Monaco

Mar 20, 2019

In this in-depth conversation, Oxford Internet Institute researcher Nick Monaco reviews the history of computational propaganda (online disinformation), which goes back almost two decades and includes countries ranging from Mexico to South Korea. His topics include Russia's IRA (Internet Research Agency), the role of China's Huawei, and a recent case study on Taiwan, where "digital democracy meets automated autocracy."

DEVIN STEWART: Hi, I'm Devin Stewart here at Carnegie Council, and today I'm speaking with Nick Monaco. He's based in Washington, DC. He works with the Oxford Internet Institute and also with a company called Graphika. He previously worked with Google's Jigsaw and was recently featured in a PBS program on disinformation and fake news.

Nick Monaco, welcome to Carnegie Council in New York City.

NICK MONACO: Thanks. Super-excited to be here.

DEVIN STEWART: So, the Oxford Internet Institute and Graphika, they're working together. Tell us about what you do and what those two organizations do.

NICK MONACO: Sure. I'll just start at the beginning and give you the history of my involvement in studying political bots and disinformation.

I got my start in 2015 at the University of Washington in Seattle looking into political bots: automated agents, computer programs posing as people online that were spreading political messages. At the time, that was an arcane topic that no one had really heard of. I hadn't heard a lot about bots myself, and I got involved with The Computational Propaganda Project at the University of Washington, looking into how political bots influence public opinion.

One year later, the 2016 presidential election happened here in the United States. Computational propaganda, disinformation, political bots became household names in a way that they hadn't been before. That's the beginning of my time looking at bots.

DEVIN STEWART: What inspired you to get involved in the first place?

NICK MONACO: I've always been interested in geopolitics. I'm a linguist by trade. I did my Master's at the University of Washington doing computational linguistics, so using programming and technology to look into the more mathematical or formulaic and formal parts of human language and seeing how you can use technology to analyze language.

At that time I met Phil Howard, who is now the director of the Oxford Internet Institute and who started The Computational Propaganda Project with Samuel Woolley. I got involved with them looking into these topics in the China region, actually, and also all around the world. Part of what I was doing then was just catching up with the long history, which a lot of people aren't aware of, of political bots and their use in manipulating political discourse in countries all around the world.

DEVIN STEWART: Can you give us a little sense of that? We haven't even gotten to Oxford and Graphika yet, but that's very interesting, because I think a lot of people have the perception that political bots and disinformation are kind of recent. But I think Wikipedia even cites examples of disinformation happening thousands of years ago, for example. Can you give us just a little taste of the history you're talking about?

NICK MONACO: One of my favorite points to have people take home when I talk about this topic is that disinformation in general, of course, probably goes back to the beginning of time and the beginning of human communication, a very long history.

DEVIN STEWART: Probably about 100,000 years ago.

NICK MONACO: Yes.

But the history of online disinformation is also a lot longer and more extensive than a lot of people are aware of. I'd say the early 2000s is when it all began.

But in the early 2010s there was a cabal of academic researchers who had been preaching the dark gospel on this stuff for a while. Dr. Phil Howard is one of them; Katy Pearce, who is also at the University of Washington, is another. A lot of scholars were paying attention to the fact that while many people were evangelical and uniformly, blindly optimistic about the affordances of social media and technology to effect democratic good, there was also this darker side that existed in places like Azerbaijan, Mexico, South Korea, Ecuador, Turkey, and Hungary at that time.

DEVIN STEWART: So the early 2000s. What type of platform would this occur on back then?

NICK MONACO: I think blogs are a place where a lot of interesting activity has taken place. Actually, John Kelly, our CEO at Graphika, got his start analyzing the Iranian blogosphere and comparing blogs and seeing what kind of interesting coordinated activity and social network mapping could be done back then.

Around 2010 to 2012 is when I think the big inception of this stuff takes place. South Korea had a 2012 presidential election in which computational propaganda played a huge role, and the national intelligence agencies were actually involved. Park Geun-hye, who won that election, was supported by the incumbent party, of which she was a member, and the National Intelligence Service waged a campaign on social media promoting her.

The New York Times did some great pieces about this, though Western media in general seems to have overlooked the fact that it happened. Essentially, over 1.2 million tweets were spread by the National Intelligence Service in Korea promoting Park Geun-hye and her party and denigrating her opponents as, among other things, North Korean sympathizers.

It's big news. I can tell by the eyes you're making right now that you take this as seriously as I do.

DEVIN STEWART: This is really serious. She ended up being impeached, as we all know.

NICK MONACO: Yes, for other corruption charges. Intelligence agents in the National Intelligence Service have also been indicted on other related charges.

That was part of a lot of problems going on at the time, but Park Geun-hye won by a fairly slim margin. There was some room there that made it not super-close. But in a place like Korea, when you're winning by just a few million votes, which isn't much of the population, and you know that the National Intelligence Service was involved in a campaign that among other things employed trolls and bots, I just think that's a really evocative and interesting illustration of the fact that this stuff can have a big effect.

DEVIN STEWART: It sounds like that's a milestone.

NICK MONACO: For me, yes. In addition to the general point that this has been going on for a long time, another thing I like to have people take home is the specifics: look at South Korea, look at Mexico, look at Ecuador, look at India in 2014. There are a lot of places where this has been part of an electioneering apparatus, meaning campaigning for the highest office in the land using social media. In some cases that's just fair digital campaigning; in other cases it veers into more unethical practice: things like Cambridge Analytica, party-sponsored trolling that becomes state-sponsored trolling, and the usage of political bots.

But this kind of thing oftentimes follows a pattern: something used for campaigning and operating in a gray area eventually spills into a black area of trolling and bots, and then becomes an incumbent state apparatus, which is then used not only for internal electioneering and promotion of the person in office but also for denigrating opponents and dissidents, a gradual ruining of democratic discourse in the country. The Philippines is also a place where this has been notably prominent.

DEVIN STEWART: We've had episodes in this series, which we call Information Warfare. Saiph Savage, for example, is an expert on Mexico's case.

NICK MONACO: Right. She's great.

DEVIN STEWART: She's great. And we've talked with people—we were actually just recently in Manila talking with people, including people from Rappler, about this issue.

Do you want to give a little plug for the Oxford Internet Institute and Graphika before we move on?

NICK MONACO: Definitely, yes.

I think the most interesting thing for both of those places is that they recently released a collective report on the Internet Research Agency (IRA), Russia's troll factory, and its interference in the 2016 presidential election. There were two reports that were commissioned by the Senate Intelligence Committee. I didn't personally work on these things, but some teams at Graphika and Oxford did. What they did is they had the data that was passed on to the Senate Intelligence Committee from the big platforms, Facebook and Twitter, and they analyzed that data.

They published these reports on the activity, and they said: "This is what we saw going on in 2016. This is how the IRA used and abused these platforms to spread divisive political messaging in discourse in the United States." That was a joint report published by Oxford and Graphika.

Beyond that, the Oxford Internet Institute is obviously at the cutting edge of—

DEVIN STEWART: Just to be clear, IRA stands for?

NICK MONACO: Internet Research Agency.

DEVIN STEWART: And it's located in—

NICK MONACO: St. Petersburg, Russia. I believe it recently moved from Savushkina Street, but it's still in St. Petersburg.

DEVIN STEWART: It has become part of the American vernacular these days.

NICK MONACO: One of the weird things about being in the field for what is considered a long amount of time, five years or so, is that, yes, these things that once were arcane and known only to you are now household names. When you say "the IRA" now, people are probably more likely, given the current context, to think of Russia than of the Irish Republican Army and that whole context.

DEVIN STEWART: Very interesting how things change.

Can you tell us a little bit about Google's Jigsaw, which is the one that used to be called Google Ideas? Is that correct?

NICK MONACO: I believe so, yes. With the Alphabet name change that happened recently, I think Jigsaw was also rebranded.

DEVIN STEWART: We actually had a chance to go visit them downtown in New York City and learn about their incredible mission. It was kind of mind-blowing to tell you the truth. I suspect that hardly anyone knows what they actually do. I was listening to this podcast. Can you give a sense of what Jigsaw tries to do, which is quite unbelievable?

NICK MONACO: Sure. I think a lot of non-governmental organizations (NGOs) are engaged in similar sorts of research into the intersection of human rights, technology, and digital rights. Jigsaw does a lot of things, but I think the core of its mission is human rights-focused research aimed at how technology can be used to promote human rights all around the world.

Even before this became a household topic, I think Jigsaw was looking into these things and asking, "What can we do to use technology for good?"—essentially trying to eliminate some of the negative externalities that have come out of the development of technology and the connectivity of the world.

One thing I think that is cool that Jigsaw has done is something called Project Shield. Project Shield is a tool that protects independent media outlets in vulnerable countries.

A form of cyberattack that happens quite often is the DDoS attack, a distributed denial-of-service attack, which overloads the servers of an independent media outlet. It can be any server whatsoever, but oftentimes when a media outlet's server is overloaded, its website goes offline and it's unable to serve its product to users, which means citizens are unable to read that media. That's quite crucial in places where the media landscape is under attack.

So, Project Shield is a way of rectifying that problem and providing free protection to entities that need it. You apply for this protection under Project Shield, and if you're approved Jigsaw protects these vulnerable outlets from these cyberattacks, which is a great way of preserving press freedom and freedom of expression in places where they're in need of that kind of protection.
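To make the mechanics concrete, here is a minimal Python sketch of the per-client rate limiting that sits at the core of this kind of DDoS protection. The token-bucket class, the thresholds, and the IP address below are invented for illustration; Project Shield's real design, a caching and filtering reverse proxy on Google's infrastructure, is far more involved.

```python
# A toy token-bucket rate limiter: the core idea a protective reverse
# proxy uses to keep any one client from flooding the origin server.
# All numbers and the IP address below are invented for illustration.
import time
from collections import defaultdict

class TokenBucket:
    """Allow roughly `rate` requests/second per client, with bursts up to `burst`."""

    def __init__(self, rate: float = 5.0, burst: float = 10.0):
        self.rate, self.burst = rate, burst
        self.tokens = defaultdict(lambda: burst)  # client -> tokens available now
        self.last = defaultdict(time.monotonic)   # client -> time of last check

    def allow(self, client_ip: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[client_ip]
        self.last[client_ip] = now
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens[client_ip] = min(self.burst,
                                     self.tokens[client_ip] + elapsed * self.rate)
        if self.tokens[client_ip] >= 1:
            self.tokens[client_ip] -= 1
            return True   # forward the request to the origin server
        return False      # over the limit: absorb the request instead

limiter = TokenBucket()
# One client firing 20 requests at once: the burst allowance serves ~10,
# and the flood beyond that never reaches the protected website.
results = [limiter.allow("203.0.113.7") for _ in range(20)]
print(results.count(True), "served,", results.count(False), "dropped")
```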

DEVIN STEWART: I doubt many people know about this. This is really incredible. The way you describe it is very benign and politically correct.

From a non-Googler, someone from the outside like me, it's quite extraordinary to see that one of the most powerful companies in the world, Google, is essentially funding almost like an intelligence agency of sorts, using technology to fight back against authoritarian regimes.

NICK MONACO: I don't know if that's how they would describe it themselves, and I'm certainly not in a position to say.

DEVIN STEWART: You can't say yes or no.

NICK MONACO: But I think the central idea is to restore some balance by again eliminating these negative externalities that come from the abuse of digital technologies.

DEVIN STEWART: The funding—Jigsaw is not a profit leader. It's a loss leader, right? It's essentially funded by the rest of the company.

NICK MONACO: I believe so. I don't entirely know.

DEVIN STEWART: It's not like they're selling anything.

NICK MONACO: There's something else. They have a proxy service, uProxy might be the name of it, and the idea is similar: helping people in places where the Internet is censored to access the full Internet and not be limited by, say, the Great Firewall in China or similar infrastructural setups.

DEVIN STEWART: It's interesting that a private company is engaged in that world.

NICK MONACO: It's fascinating. I think Google was ahead of the curve in having a public-facing entity doing that kind of work. Companies are now being put in a position where they certainly have to take public stances on these things, and we'll see if other companies adopt the model Google has of a dedicated public-facing entity that does this kind of work. But I think it will be harder and harder not to take a stance.

DEVIN STEWART: Interesting.

You've talked about disinformation as a problem for democracies in general. I think you cited the World Economic Forum's warning that among the top 10 perils to society worldwide is disinformation.

Can you talk a little bit about how you see disinformation as a problem to societies, especially democratic societies, and then also a little bit about the tools that you study, the trolls and the bots and other tools?

NICK MONACO: Definitely. I think the central worries are both a coarsening and eventually a censoring of discourse.

One of my colleagues once put it this way: originally there was censorship by the elimination or prevention of information appearing, and now this has become an environment where censorship occurs by content production rather than elimination of content, just like the DDoS attack I described to you earlier, overloading a server with too much traffic. This is like an information-based distributed denial-of-service attack wherein citizens are so overwhelmed with information that it's hard to fish out what's relevant, or even more crucially, what's true.

DEVIN STEWART: Russia uses this technique, apparently.

NICK MONACO: All kinds of places do. Russia is a place that in the early 2010s was using these kinds of things internally, and there are a lot of great articles about that from places like Global Voices. Adrian Chen at The New York Times did a great article, the original blockbuster blowout article on the Internet Research Agency, just called "The Agency" back in 2015.

DEVIN STEWART: Where did that appear?

NICK MONACO: The New York Times Magazine. In 2015 that was the biggest story in the American media about this thing. Miriam Elder at BuzzFeed was covering it before then, and in Russian-language media a lot of people were covering the operations of this troll factory domestically.

Anyway, I'm getting long-winded here, but Russia was using this at home on LiveJournal, a social media blogging platform that's popular in Russia, to steer political discourse, shaping what citizens saw online and what they were able to glean about politics, basically steering them in the direction the government wanted and overwhelming them with information.

It happened on municipal websites too, where the operation was a team of three at the troll agency commenting on the forums. One would play the token naysayer and take a cynical stance on something that was posted, another would argue for it, and a third would take the stereotypical troll stance of making fun of both of them, manipulating the discourse.

DEVIN STEWART: That's very clever.

NICK MONACO: The effect was that people didn't engage with the content anymore, and they didn't have a voice, because these loud people were manipulating the comment sections. Eventually you can lead people to the conclusion that you want them to reach.

DEVIN STEWART: These are Russian municipal sites?

NICK MONACO: Kind of a weird thing to wrap your head around, but yes.

DEVIN STEWART: So they're perfecting it at home?

NICK MONACO: Exactly. It wasn't only domestically focused, but it was very locally focused for a while.

Again, there's a lot of really great reporting on this. Lawrence Alexander is a data scientist based in the United Kingdom who has done some good writing at Global Voices on this, again Miriam Elder, Adrian Chen, a lot of people have been documenting this stuff for quite a while.

So yes, Russia uses stuff domestically, perfected it, then started trying it out abroad.

The IRA is not a well-oiled, perfect machine by any means. Other articles at Global Voices have done a great job of documenting the Agency's greatest misses, stuff like botched, faked videos on YouTube showing—I can't remember the specifics—soldiers in Ukraine weighing in on a Dutch referendum. The Dutch had the say in this referendum on whether or not Ukraine could have a business deal with the European Union, and Russia was trying to sow division and make Dutch people vote no. They botched the video, and it was easy to trace it back to the IRA from the way they did it.

There are all kinds of incidents like this. There's an outfit called Bellingcat, founded by Eliot Higgins, another person from the United Kingdom, that does all sorts of great work uncovering, among other things, the Internet Research Agency's operations and also GRU operations, essentially taking advantage of the sloppy mistakes these actors make in public.

That's the cool thing about Bellingcat. They use open-source intelligence. They don't have platform access, they don't have intelligence community access or any kind of privileged access; they just know how to leverage publicly available information, in the biggest cases to uncover government intelligence agencies' operations, reveal the identities of GRU agents, and reveal Russia's lies about MH17. A lot of people are doing great work on this kind of stuff.

DEVIN STEWART: You started to touch on how effective or ineffective these techniques are. Could you give us a sense of the state of the art, the state of play of disinformation today and where it's heading?

You mentioned before we started the podcast talking about these topics with your friends and colleagues. I'm curious about what's your measure of the state of awareness in American society or just generally about the need to "do your research" on stories.

NICK MONACO: Those are two really big questions. Make sure to let me know if I forget to answer one of them.

On the current state of disinformation I am cautiously optimistic. One of the cool things about being involved for so many years is that we're in a much different situation than we were even just a few years ago, and part of that is public awareness, things like people knowing what DDoS attacks are, what bots are, and all that kind of stuff. But it's also about companies taking a stance and actively monitoring for these problems, addressing these problems, or talking about these problems. I think that the discourse going on both with the general public and with people working at these companies is positive and gradually, incrementally getting us where we need to be. There is a sense of urgency about it because it is a big problem, but I think the nature of the problem is one that is necessarily complex and thorny and something that's going to take a lot of time to figure out.

In the meantime, I think that people—both researchers and people within companies—are getting better at both monitoring for these kinds of things and knowing what kinds of tools they can use to again spot the fact that these information operations are taking place.

In terms of being a researcher, there are great tools now for classifying bots online, application programming interfaces (APIs), so you can use a quantitative tool in addition to your own human intuition to figure out the probability that a certain account is a bot.

DEVIN STEWART: What are the variables you look at, for example?

NICK MONACO: That's a great question. For bots, all kinds of things. The way machine learning and these classifiers work is that they take all kinds of little metrics and combine them, and based on the training they figure out which ones are most predictive, which ones are most valuable for figuring out whether an account is a bot or not. Something like average tweets per day would be very predictive if, say, an account were tweeting a thousand times per day. If you see an account tweeting thousands of times per day, it's more likely than not automated. It's probably not a human.

DEVIN STEWART: So it's like a regression analysis.

NICK MONACO: That's exactly what it is.
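As a concrete illustration of that "regression analysis" idea, here is a minimal Python sketch of a feature-based bot classifier using scikit-learn's logistic regression. The features, numbers, and labels are invented for this example; production classifiers such as Botometer combine far more features over much larger labeled datasets.

```python
# A feature-based bot classifier sketch. Logistic regression is the
# "regression analysis" idea from the conversation: the model learns
# a weight for each metric from labeled training examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one account:
# [avg_tweets_per_day, followers_to_following_ratio, fraction_retweets]
# Labels: 1 = bot, 0 = human. All values here are invented for illustration.
X_train = np.array([
    [1200.0, 0.01, 0.95],  # tweets ~1,000+ times a day: almost surely automated
    [800.0,  0.05, 0.90],
    [3.0,    1.20, 0.20],  # a few tweets a day, balanced network: looks human
    [10.0,   0.90, 0.35],
    [950.0,  0.02, 0.99],
    [5.0,    1.50, 0.10],
])
y_train = np.array([1, 1, 0, 0, 1, 0])

# Training finds which features are most predictive of the bot label.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Scoring an unseen account yields a probability, not a verdict:
# a human analyst still interprets the number.
unseen_account = np.array([[600.0, 0.03, 0.97]])
print(f"P(bot) = {clf.predict_proba(unseen_account)[0, 1]:.2f}")
```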

DEVIN STEWART: Interesting.

NICK MONACO: The point of saying that is that I think that researchers, academics, companies, and the general public are getting better at both knowing there's a problem and knowing what tools are available to look into those problems. That doesn't mean that there's not a long way to go for all those parties I just mentioned.

But the future of disinformation? I think that with 2016 and the Cambridge Analytica scandal we've seen that disinformation online is not an isolated problem in and of itself. Privacy feeds into it, data availability feeds into it, data policy feeds into it. There are data laws about how companies can use your information or how they store it, and that data is vulnerable to hacking and theft, which in turn can inform high-accuracy, high-precision disinformation campaigns targeted with granular accuracy at an individual or a group of individuals.

I think it's useful to understand the interconnectedness of all kinds of digital rights problems and how they contribute to disinformation campaigns. In the future I think we're going to see more of that as the Internet of Things comes online and more data about individuals is available. I think it might make Cambridge Analytica look like the good old days in a way.

A lot of people are also quite concerned about deepfakes and the emergence of artificial intelligence (AI) that sort of produces messages.

DEVIN STEWART: Explain deepfake. That's a very interesting technique.

NICK MONACO: The idea about deepfakes is essentially that algorithms can produce videos on their own, so these are no longer videos that are taken in real life and manipulated by a human. They aren't even based in real life, they're just generated from an algorithm. They can depict essentially whatever you want them to.

DEVIN STEWART: One of the famous examples is Jordan Peele doing Obama. I guess for a layperson like me I think of it as using something like artificial intelligence to portray someone else, the appearance of someone else, right?

NICK MONACO: I think the canonical example is making a president declare war.

DEVIN STEWART: Was it George Bush or—

NICK MONACO: I think there have been ones with Trump and Obama to some extent, but the nightmare scenario is essentially a Black Mirror-type thing: promoting a fake video of the president declaring war, sending it to news outlets, something like that.

I think journalists have a responsibility to know how to fact-check these things, how to verify things online. I think they do a good job, but that's going to be like continuing on-the-job training.

DEVIN STEWART: Deepfakes get a lot of attention. Do they deserve the attention they get, do you think?

NICK MONACO: That's a really great question. I think it's important that people think about it, in the sense that it's always better to design something well in advance than to try to mitigate with ad hoc solutions after we have a problem, as we've seen with disinformation and social media companies.

Some of those things are preventable with good design, and some of them aren't. You can't anticipate every negative externality, whether social, economic, or political, of any one product or tool, but it's good to try. I think in that regard it's good to think about deepfakes.

Another thing that's important regarding the future of this problem—and I think part of why I'm here today—is to think a little bit about China. For all the problems we have in the West, I take great comfort in the fact that we can talk about these problems and we can talk about our skepticism or our optimism about AI.

China is not a place where we have the same kind of free discourse about these topics, and I think that's quite concerning. In addition to that socially circumscribed area of discussion, you have a real gray area between the public and private sectors, by which I mean the government is quite involved at high levels in big tech companies, along with a lack of good data policy and a lack of concern about data privacy. These things potentially add up to a government that has the ability to tell companies how to do their business and has access to a huge amount of data on over 1 billion people, and on the people they talk to if they're using WeChat.

Here in the United States and in the West we theoretically have a delineated zone between the public and private sectors, and people are thankful that the government doesn't have access to all of Google's data. It's hard to say whether that's the case in China, but I think it's reasonable to be skeptical and to assume that there's some sort of exploitation going on there.

How that relates to future disinformation I think is critical. Data pools help inform AI algorithms.

DEVIN STEWART: Before we turn to your case study of Taiwan, I want to ask you a very tough, maybe controversial question because it sounds like you're sort of dancing around this question, or maybe I'm just perceiving it and imagining it, but Huawei.

Huawei is in the news all the time, installing 5G all over the world as an upgrade to current legacy equipment. The big question out there that I can't get an answer to is to what degree, if at all, Huawei might be helping the Chinese government obtain intelligence or other things. Do you have any thoughts on that?

NICK MONACO: I don't know anything more than you do by any means, but I think it's very reasonable to be skeptical and reasonable to ask those questions. Governments, even here in the States, are incentivized to have as much access to companies' data as they can. We saw this in the battle over the San Bernardino phone, which was encrypted and locked, when the Federal Bureau of Investigation (FBI) tried to go to trial to get access to that data.

That trial didn't end up happening. The FBI said: "Oh, we got access to the phone. We don't need to go to trial about it." But that's a good illustration of the fact that even in a place that is, let's assume, devoted to human rights and democratic principles, the government is still incentivized to get access to as much data as it can.

In fact, Michael Hayden, the former head of the National Security Agency (NSA) and Central Intelligence Agency (CIA)—James Comey was heading the FBI at this time—said, "I don't think it's right what the FBI is doing, but if I had that job, I'd be doing the same thing."

DEVIN STEWART: Of course.

NICK MONACO: So there's this dissonance even within one person who has a great amount of knowledge about both the inside and outside circumstances.

DEVIN STEWART: That's the way bureaucracies work. They try to do the maximum they can do.

NICK MONACO: Right. So this battle is tough even in Western liberal democracies.

But with Huawei we can reasonably assume that big companies within China are pressured to give the Party and the government access, and I think it's also reasonable to assume they probably don't have a lot of ground to stand on to oppose that. It's a big question that we shouldn't stop asking, and I think the ramifications are truly global and geopolitical in nature.

One interesting possible scenario is that China gains a soft power it has not historically had through the production and promotion of essentially overt authoritarian surveillance technologies. There's a history of autocracies collaborating and/or sharing information on oppressive technologies: Egyptian telecom companies helped North Korean companies set up telecom infrastructure within North Korea. There are all kinds of examples like this from the past, so I think it's reasonable to assume that Huawei is under pressure.

How they're responding and what's going on otherwise is conjecture, but I don't think it's hysteria on the part of Western governments to express the concerns they're expressing. I think it's entirely reasonable. I think it's necessary. I think it's a good precaution.

DEVIN STEWART: And it's in the news almost every day these days.

NICK MONACO: Absolutely.

DEVIN STEWART: The Financial Times and Wall Street Journal both ran stories on this just in the past week or so.

NICK MONACO: Another interesting thing on this topic is China's role in hardware production in general. Some journalists at Bloomberg did a really great story in 2018 about microchip production in China, essentially alleging that there may be a hardware vulnerability in chips produced in China that allows broad access to data within even American companies and potentially intelligence agencies. Companies have been fairly uniform and fairly insistent in their response that this hasn't happened, but it was in Bloomberg, it got a lot of attention, and the journalists who wrote the story are very thorough in their work.

I think this is something that's going to be of increasing importance geopolitically as time goes on. It's not going to get less relevant. As you said, I think Huawei is going to be in the news more and more. I don't think it's going to go away. To continue to be aware and ask those questions is important.

DEVIN STEWART: Thank you, Nick.

Ending on your Taiwan case study, I think your study had the expression, "digital democracy meets automated autocracy." I really like that. It's a nice turn of phrase.

NICK MONACO: Thanks.

DEVIN STEWART: Essentially examining China's disinformation campaigns and their impact on democracy in Taiwan. Your case study was looking at computational propaganda in Taiwan. You didn't get to talk about that in your PBS show, so now's your chance. How did you set up the study, and what did you find out?

NICK MONACO: Thank you for the compliment on the title. What I'm getting at there is that these are two places that share a common history and culture to some extent and yet have two fundamentally different visions of how technology can be used to shape their societies.

In China, as I've described at length already and as is epitomized in pretty much everything going on in the northwestern region of Xinjiang, you have a place where the government is basically dreaming up a digital dictatorship: a place where technology takes data from citizens and feeds an autocratic structure that oppresses them.

In Taiwan, on the other hand, you have a really strong civic tech movement. Places like g0v (gov zero) and the Open Culture Foundation are NGOs that promote use of technologies to enhance democratic and open data policies. Audrey Tang, the digital minister of Taiwan, is someone who is really progressive and again is in Tsai Ing-wen's cabinet promoting these kinds of policies. In some ways it's the most progressive country on Earth, promoting technology for the good of democracy.

So you have these two opposite visions of how technology can be used to enhance democracy or enhance autocracy.

In the case study we looked for signs of this kind of stuff. As I've told you already at length, political bots have been around for a long time, so we looked at the digital sphere within Taiwan to see if there was evidence of political bots being used and also just to get informed about computational propaganda within that sphere of cross-strait relations.

What we found was that there isn't much evidence so far of automation being used for propaganda or messaging within that sphere, but there do seem to be coordinated messaging campaigns, mostly emanating from humans. The example we explore in the paper is a messaging campaign against Tsai Ing-wen when she was elected in January 2016, in which a group of citizens, presumably real people, got together on Tieba, basically a Chinese equivalent of Reddit, and said, "All right, we're going to jump over the wall"—the Great Firewall—"at this time, and we're going to post pro-China, pro-unification messages on Tsai Ing-wen's Facebook page." And they did.

The post that got attacked, one of Tsai Ing-wen's posts, had substantially more comments on it than all the other posts on her page. These comments were promoting the Eight Honors and Eight Shames, socialist principles that Hu Jintao penned in the 2000s, but basically the idea was promoting unification. They were saying Taiwan isn't a country, your party isn't legitimate, and we'll be unified one day. Not much of it was acrimonious, but it was definitely pro-China, pro-unification-type stuff.

There have also been incidents of organized trolling which, while not necessarily emanating from the state, or provably emanating from the state, certainly seem to hew to the state's line and promote things that are advantageous to the state. Ursula Gauthier, whom I discuss briefly in the paper, was the victim of a fairly organized trolling attack on Facebook and other platforms that eventually resulted in her losing her press credentials and having to leave the country and go back to France, which was a really unfortunate event.

To wrap it up, I guess the study shows there's a big human element to messaging and pro-China messaging online.

Interestingly, as I was telling you before the interview, Craig Silverman at BuzzFeed yesterday published an article saying that essentially similar messaging seems to be happening on Reddit. There's a lot more pro-China commentary on Reddit that is essentially trying to bury anti-China comments and promote pro-China ones, and it shows signs of being human. I think there's an anonymous source in the article who claims to have been part of these operations. Bill Bishop from Brookings is quoted at length several times saying this is something that seems to be really ramping up and swelling in the past year in particular.

So I think what we can expect, especially with regard to Taiwan in the future, is very human, not very automated, messaging, because as I highlight in the paper, this is an issue most mainland Chinese feel fairly uniformly about, and interestingly, one they're encouraged to express their true opinion on. Conceivably, they truly feel this way. If you've ever talked to someone from mainland China, they're fairly convinced that Taiwan is a part of China and there's nothing to talk about, and they're happy to tell you that.

So this is an interesting place where Chinese citizens are encouraged to say whatever they want because they probably have the same opinion on the matter. Essentially you can outsource that propaganda to real people and encourage them in certain situations to express their opinions, attack Taiwanese politicians online. Really interesting stuff. Essentially all the evidence points to coordinated human messaging around pro-China themes within this study that we did about two years ago.

I think there's a lot more good research to be done. Platform penetration is an important question. How people use the Internet in Taiwan isn't the same as in, say, Mexico. Facebook is very popular in Taiwan, but Twitter is not. There's a local sort of Taiwanese Reddit called PTT that I think is probably a great place to do more research on these topics.

China, of course, is politically relevant to a lot of countries on Earth and not just Taiwan, so as with Huawei and the stuff we talked about before, I think this is something that is only going to be more relevant as time goes on, even for people who aren't necessarily China hands or China watchers.

DEVIN STEWART: Nick, final question. You mentioned throughout the past 40 minutes or so that we've been chatting specific takeaways that you hope people hear when you talk to them about this issue—I think you mentioned two or three at least. Is there one final one that you want to make sure you get in there before we wrap up?

NICK MONACO: That's a great question.

I think the main ones are that this is a problem with a long history, going back to 2000 or so and definitely to 2010, and that digital disinformation is informed and affected by other digital rights issues such as data policy and data privacy. I think it's also important to take a breath and realize that there is some ground for optimism here. We're definitely not in the same situation we were in a few years ago. People are more aware of the problem, and a lot of really smart people are working on these problems inside these companies, outside of them, among academic researchers, and in civil society and the private sector.

It's good to appreciate that awareness. That's something that might be a little more evident to someone who has been in the business, so to speak, for a few years than to someone who hasn't.

But things like HTTPS, the secure HTTP protocol that encrypts web traffic by default, are the kind of big changes that really affect digital rights and digital privacy and that I think get taken for granted. Essentially, in the 1990s you could sit in a café and just read what everyone was doing online from your computer if you had the savvy to do it. Now that's not the case because of HTTPS. While this is a change that affects privacy more than disinformation, I think it's a good example of how things can really change overnight for the better once you get enough people behind a problem.
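To make that contrast concrete, here is a small standard-library Python sketch that sends the same request over plain HTTP and over HTTPS. The host is just a placeholder; the point is that only the TLS step keeps the traffic from being readable in transit.

```python
# The same request over plain HTTP and over HTTPS, standard library only.
# On port 80 every byte is readable on the wire (what a 1990s café
# eavesdropper would have seen); wrapping the socket in TLS on port 443
# turns the traffic into ciphertext in transit.
import socket
import ssl

HOST = "example.com"  # placeholder; any HTTPS-enabled site works
REQUEST = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

# Plain HTTP: this request and its response travel as readable text.
plain = socket.create_connection((HOST, 80))
plain.sendall(REQUEST)
print(plain.recv(64))   # readable response headers
plain.close()

# HTTPS: same request, but the TLS layer encrypts everything in transit.
ctx = ssl.create_default_context()
secure = ctx.wrap_socket(socket.create_connection((HOST, 443)),
                         server_hostname=HOST)
secure.sendall(REQUEST)
print(secure.recv(64))  # decrypted locally; an eavesdropper sees only ciphertext
secure.close()
```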

So, yes. Not a new problem. We're not in the same place we were a few years ago. And discourse and data policy and privacy, all these things are relevant to disinformation.

Once again, I've given you a very verbose answer, but yes, I hope those are some good takeaways.

DEVIN STEWART: That's great.

Nick Monaco is affiliated with Oxford Internet Institute as well as Graphika, and he's based in Washington, DC.

Nick, thanks for telling us all about computational propaganda.

NICK MONACO: I'm happy to do it. Thanks for having me.
