Mapping the Impact of Data Fusion on Freedom, Security, and Human Rights

Feb 14, 2024 57 min watch

Today, communities are experiencing the effects of the widespread adoption by law enforcement of data fusion technology: automated software for correlating and fusing surveillance data from a growing web of sources. Though this technology has received scant attention compared to other novel forms of surveillance, its civil liberties implications are grave.

This virtual panel discussion explores the impact of data fusion and examines critical ethical questions around its development and use. This panel was moderated by Carnegie Council Senior Fellow Arthur Holland Michel and featured an exclusive unveiling of a new educational tool to map the effects of data fusion.


ARTHUR HOLLAND MICHEL: Good evening, everybody. My name is Arthur Holland Michel, and I am a senior fellow here at the Carnegie Council for Ethics in International Affairs. I am delighted to have you all here with us today to discuss a topic which I believe has not received the attention that it deserves and which it is certainly likely to increasingly merit in the years ahead. I am referring, of course, to the automated fusing of surveillance data. If you are not quite clear as to what I mean by that, that is quite okay. That is exactly why we are here today.

I am very excited to be joined for this discussion by Eleni Manis, the research director of the Surveillance Technology Oversight Project, and Chris Gilliard, Just Tech fellow at the Social Science Research Council, both of whom have had a tremendous impact on how we think and talk about surveillance. Rather than list all of their many accomplishments, let me just say that their work on this topic is one of the very rare things that actually gets me out of bed in the morning and keeps me hopeful.

With that, just to say, hi, Eleni, hi, Chris. It is great to have you both here.

CHRIS GILLIARD: Hi. Thanks for having me.

ELENI MANIS: I am glad to be here.

ARTHUR HOLLAND MICHEL: Before we delve into the discussion itself, I want to share something that the Council is very, very proud to be unveiling today. A little backstory: Last year we brought together a crew of leading experts in surveillance issues to think about the issue of fusion, and this is the outcome of that work. Essentially this is an educational tool that is designed to bring the public up to speed on what is actually meant by the term “data fusion” of surveillance data, how it works, and what its implications are.

What I think this tool does very well is connect the activities that all of us may engage in on a regular basis—activities like owning a car, using public transport, or participating in political actions—with risks of undue government intrusion that are greatly enabled by this technology. It sort of takes you step by step, in a very visual vocabulary, through what this technology does and what it means for all of us. You can explore this tool yourself—it is now freely available—and use this as a springboard for the “ethics forward” discussion that is so desperately needed on this topic.

With that said—and we may refer back to the tool over the course of the discussion—I would like to turn to Chris and Eleni.

First, to Chris, one of the issues, as I mentioned, that we have identified at the Council through our work on this topic is that a lot of people really do not know what data fusion is, so I was wondering if to get us started you could just describe what we mean by that term.

CHRIS GILLIARD: The mechanisms of surveillance in our society right now range from things like automated license plate readers to mics that purport to be gunshot detectors, footage from police body cameras, and also doorbell cameras that people have on their front porch. Fusion, briefly understood, is the collection or agglomeration of all of that data, sent to one place—often a metropolitan police department or some other central location—where often some form of machine learning tool, often proprietary, runs on the data, sometimes in real time, in order to predict or often make assumptions about criminal activity. There is some discrepancy in the count of how many of these things there are. I have seen anything from 135 to 250 across the country, but the number is growing rapidly as we speak.

ARTHUR HOLLAND MICHEL: Eleni, Chris gave us a rundown of the individual surveillance technologies that populate our daily lives. I was wondering if you could give us a sense of why the fusion of these technologies—or of the data that these technologies generate—is different, and particularly why the automated fusion of this data through, as Chris mentioned, algorithms and machine learning, is a step beyond what we are already very familiar with.

ELENI MANIS: Let’s start with a baseline for that and talk about individual surveillance tools because a single one is enough to do serious damage. A reverse keyword search can reveal everyone who searched for “abortion” in a state that criminalizes that care. Body cameras present a police view of events, when the footage is available at all. Individual surveillance tools are pretty terrible.

What happens when we fuse their data? What gets worse? I think what Chris has already suggested is that over-policing is intensified. When we consider how that happens, I think one thing we need to pay attention to is how multiple data points that seem to point in the same direction can seem to add up to good evidence when they do not. Piling bad data on top of more bad data does not produce good evidence.

The best example I can think of is actually not automated, but I will get there. I was thinking about gang databases. In cities like New York the following can add up to being registered as a gang member: wearing the wrong clothes, living in the wrong neighborhood, being friends with people in your neighborhood, being associated with them on social media, hanging out with people in your neighborhood, or even just being fingered by a so-called “reliable source,” whatever that means. That is all bad data. Sometimes a baseball cap is just a cap or hanging out with kids on the stoop is just hanging out with your neighbors, but when police fuse data the teenager in a hat, in a certain neighborhood, on a stoop with their friends sometimes can all seem to add up to evidence of gang membership, and it is just not true.

All of this is worse when the data is being fused automatically. Part of it is automation bias; we are inclined to trust decisions that are spat out, supposedly objectively, by a computer. But even minus automation bias, think about it: Who wants to be the officer who sticks their neck out and says, “Hey, this predictive policing tool is just sending us back where we have always gone”? Even minus automation bias, once we automate tools it becomes difficult to push back against their decisions.

Here is a case of automated fusion: You may well have a predictive policing tool that takes previous arrests, ShotSpotter alerts, and known gang locations—whatever that means—and suggests that police continue to police the areas they have always policed. Those look like many independent data points, but they all represent the same thing, which is a historic pattern of policing the same neighborhoods. It does not add up to good evidence, so I would say that is one of the key differences when we talk about data fusion.
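To make that point concrete, here is a minimal, hypothetical sketch—the signals, weights, and source labels are invented for illustration and are not drawn from any real system—of how a naive fusion score that treats correlated signals as independent evidence inflates confidence in exactly the way described above.

```python
# Hypothetical illustration: three "signals" that all derive from the same
# underlying fact (a neighborhood has historically been heavily policed).
# A naive fusion score counts them as three independent pieces of evidence;
# a source-aware score counts the shared origin only once.

signals = {
    "prior_arrests_nearby": {"weight": 1.0, "source": "historic_policing"},
    "shotspotter_alerts":   {"weight": 1.0, "source": "historic_policing"},
    "gang_database_hits":   {"weight": 1.0, "source": "historic_policing"},
}

def naive_fusion_score(signals):
    """Adds every signal as if it were independent evidence."""
    return sum(s["weight"] for s in signals.values())

def source_aware_score(signals):
    """Counts each underlying source of evidence only once."""
    per_source = {s["source"]: s["weight"] for s in signals.values()}
    return sum(per_source.values())

print(naive_fusion_score(signals))   # 3.0 -- looks like three pieces of evidence
print(source_aware_score(signals))   # 1.0 -- really just one historic pattern
```

The toy numbers only illustrate the logic: signals that all reflect the same underlying pattern do not multiply the weight of the evidence they share.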

ARTHUR HOLLAND MICHEL: I would imagine that some people in the audience will be thinking that the types of correlations that we are talking about are the kinds of things that detectives and police have been doing for a very long time. You have the classic instance in a movie of a detective who has a kind of pin board with all of the pieces of evidence and scraps of paper and pictures of people who may or may not be involved and strings attaching them together. That is in one form or another a form of fusion. At the end of the day, detective work is putting one and one or two and two together to come up with some kind of conclusion that is more than the sum of its parts.

I want to drill down with both of you as to what is different when you automate that process, or, even to take a step back from that, what is different when you fully digitize that process?

CHRIS GILLIARD: For me, I think one of the biggest ways to think about it is that it turns even the most minute aspects of everyone’s life into data, and all of that data becomes police data.

Eleni mentioned abortion. I think a lot of this can be understood through the lens of the Dobbs decision. We have seen states such as California try to put limits on how much of what they call “sensitive data” is collected or transferred out of state. It turns out that with the right—or wrong—kinds of inferences, which tech companies make all the time, all data becomes sensitive data. Something seemingly innocuous, like what you buy when you go to a drugstore or how you have changed your diet in the past four weeks—I could go on—becomes sensitive when you are running processes on it in order to glean some kind of information or make some kind of inference. So the idea that all data—not only what we do in what is understood as public, our movements and things like that, but what we search, who we talk to, and so on—becomes police data and then fodder for running operations on is scary to me. I think that is the most onerous part, and it is often not well understood.

To say it again briefly: if all data is police data, I think that is a short step to a very authoritarian society.

ARTHUR HOLLAND MICHEL: I want to flag very briefly something you mentioned that I think is important to note, that fusion is not a practice that is unique to the law enforcement domain. In fact, a lot of these processes and logics of correlation have actually been thoroughly tested and proven in, for example, the online marketing domain, in figuring out how to serve people ads.

It is actually a good way, for our purposes in trying to inspire dialogue and understanding around this topic, to look at some of those examples because they can resonate. One that I will very quickly give: I heard from someone that certain companies, if they get one data point that it is a Sunday and another data point that a person has not moved—as in, their phone has not moved in some period of time—will fuse those two data points to suggest that the person may be lying on their couch, perhaps with a headache, and that it would be a good opportunity to serve them an ad for some home food delivery.

I just put that out there to give an example of how we are already dealing with a lot of this, but also as a way to comprehend how fused data points can add up to more than the sum of their parts.
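As a rough sketch of the kind of rule-based inference in that ad-serving example—the threshold and the rule itself are hypothetical, invented purely for illustration—the logic might look something like this:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: two innocuous data points, fused, yield an inference
# that neither data point supports on its own.

def infer_on_couch(now: datetime, last_phone_movement: datetime) -> bool:
    """Fuse 'it is Sunday' with 'the phone has not moved for a while'."""
    is_sunday = now.weekday() == 6                      # Monday is 0, Sunday is 6
    phone_idle = (now - last_phone_movement) > timedelta(hours=3)
    return is_sunday and phone_idle

now = datetime(2024, 2, 11, 15, 0)                      # a Sunday afternoon
last_phone_movement = now - timedelta(hours=4)          # stationary since 11 a.m.

if infer_on_couch(now, last_phone_movement):
    print("Serve a home food-delivery ad")              # the fused inference
```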

With that brief digression, I want to turn back to Eleni as to what these unique risks constitute beyond those we have already mentioned.

ELENI MANIS: Let me speak to Chris’s point, which is that all data is sensitive data, all data is police data, and to your point, which is that there are commercial entities that are making data fusion worse. Utilities these days install “smart meters” that are intended to help customers reduce their usage and save money, and that help the utility avoid blackouts or excess demand during peak times. It sounds good when it works; it sounds pretty benign.

However, in late 2021 we learned that Immigration and Customs Enforcement (ICE) was weaponizing utility data. They got it from Thomson Reuters CLEAR, a commercial surveillance product that incorporated people’s payments to utilities—gas, electric, and cable—and ICE was able to build profiles of individuals, figure out who was home, when, and how many people were in the household, and use that data to apprehend people. This even happened in sanctuary jurisdictions like California, where local police are not supposed to help deport their residents and do not want to. I absolutely concur with what you said.

You also talked about a detective making a pin board. When we imagine who is worth making a pin board for—it has to be a pretty serious crime to have a detective setting up a pin board on you—but who is worth pushing a button and telling an automated system, “Tell me all about this person”? Maybe just somebody you stopped on a corner.

One of the concerns that I have about data fusion is that it can be used to profile people who are under no individual suspicion of a crime at all: “Automated system, tell me who is likely to have sought an abortion by state, and do this based on information about who drove across state lines, parked across the street from a clinic in a garage, and meets certain demographic criteria.” The efficiency with which automated tools can be used poses a risk that a detective with a pin board never did.

ARTHUR HOLLAND MICHEL: Your point about accessibility I think is a very important one. For anyone in New York City who has survived this morning’s snowstorm, it is worth noting perhaps that the New York Police Department (NYPD) has one of the most sophisticated fusion programs in the world. It is called the Domain Awareness System, and it is available on every single police officer’s iPhone. It is quite eye-opening when you think about what that potentially does to the threshold for investigation and for considering a person a suspect.

In the tool, we identified three categories of risks that can emerge from the use of these systems, one being that people will be suspected of crimes that they had absolutely nothing to do with—this notion that correlations can sometimes make something seem to be what it is not.

The second category of risk is that the systems can in fact be very, very effective at finding people who have indeed engaged in a certain activity. That is problematic when governments criminalize activities that are actually just the exercise of fundamental rights.

The third one is that often these tools are acquired for purposes that a majority of the population will be able to very much get behind, like pursuing very violent offenders. An example that is often given is finding children who have been abducted, but then because of the accessibility of these tools they end up being used for targeting the unhoused population, people who are jaywalking, or people who are putting their trash in the street.

Those are three very, very clear risks that I would encourage everyone to explore through the tool, but I want to go over to you, Chris, and ask what some of the corollary risks might be—whether there are ways that the existence of a fusion tool within an organization creates unseen risks and changes, perhaps, the logic of how data is treated and collected, and why it is collected.

CHRIS GILLIARD: Absolutely. Eleni mentioned automation bias, and I think even without fusion—I am based in Detroit. Detroit is, among other things, home to three of the most high-profile instances of people being arrested in part because of a faulty facial recognition identification.

One of the most noteworthy parts of this is that police came to a guy’s house, arrested him in front of his family and his children, and took him in. He was accused of stealing expensive watches, and at one point they showed him the picture that they said was him. He said it was not him. The law enforcement officer responded, “Oh, so I suppose the computer is wrong,” indicating that because facial recognition said it was him, they believed it must be him.

Part of what comes to mind when I think about this is that that was with one data point, one piece of information. I think about the overwhelming amount of what would seem to be evidence that could be marshaled against people with systems like this.

I say “could be” as if it is hypothetical, but we have already seen lots of examples. In New York we have seen the surveillance of Muslims post-9/11, we have seen people riding their bikes be accused of robbery based on geofence warrants, and things like that. I think it is important to point out that this is not hypothetical. We already have plenty of examples, and a lot of those examples, yes, are based on one or two streams of data. The potential “swarm” of information, as we call it in the tool, that could contribute to these things is something worth paying attention to.

ARTHUR HOLLAND MICHEL: Is there a risk that the existence of this tool creates a kind of feedback loop, where, if you have the capability to fuse disparate data points across different data modalities—video with location with textual records—that will create a logic within an organization that one should always be collecting more information because it could be useful in the context of these other modalities? Is it likely? Have we seen organizations that have a tool of this nature expand their data collection practices? Would either of you like to jump in on that possibility?

ELENI MANIS: I remember a fusion center director—fusion centers are places where local, state, and federal law enforcement collaborate and share data—saying, “When we have money, we go knocking on doors and say, ‘Hey, can I buy this data set?’ because you never know what might be useful.” I think there is a certain greediness for data.

CHRIS GILLIARD: This might seem like an inapt comparison, but I think in many ways law enforcement can be compared to companies like Facebook and Google in that there is a fairly voracious appetite for data. The idea is that more data will equal more results. I personally do not think that is borne out.

Another thing that is discussed in the tool is an idea I don’t think a lot of people have reckoned with: what “perfect” law enforcement would look like. That would make for a very difficult society to operate in.

ELENI MANIS: Chris, I want to circle back to something else that you said, and I hope it is not out of place now. You talked about false positives and facial recognition and how they pose a problem for the few unlucky individuals who were pulled in by police because of a false positive.

It occurs to me that we all lose when everyone is a permanent suspect, when all of our data is being searched over and over and over again, and all the more so for over-policed communities, where individuals are more likely to be in facial recognition galleries or other police databases and more likely to be searched over and over and over again in case they were possibly involved in a crime.

CHRIS GILLIARD: You talked about gang databases. One of the things that you omitted—I am sure you know, but I want to make sure everyone else knows—is that there are often children in these databases as young as, I think, four or five.

I often joke—there is not a lot of levity in this discussion—that if there were algorithms, predictive policing, or fusion centers for wage theft or white collar crime I might talk about this differently and think about it differently. But, yes, often these resources are marshaled against already over-policed and the most vulnerable populations.

Given the movement of society at this point—we mentioned abortion—I think about trans folks, who again are often just seeking medical care or seeking to go about their business unmolested. There are many states, many politicians, and many law enforcement officials who are trying to make that an impossibility, and these tools are the kinds of things that would ramp up those techniques.

ARTHUR HOLLAND MICHEL: It makes me think about how these technologies can vastly expand the web of suspicion. If your data is in one of these powerful tools, then the tool will always, in some way, be actively seeking to associate you with a potential crime, because that is the purpose of these tools.

I was shocked by something I found when, as part of the reporting for a story I worked on a few years ago for Wired, I got my hands on one of these tools, the NYPD’s mobile phone fusion system. Anybody who had made a 311 call or was named in a 311 call—for our non-New York City audience, a 311 call is when you call the city about leaves on the street that have not been cleared, or ice, or any city-living administrative matter that does not in any way rise to the level of criminal consideration—all of that was recorded in the fusion system, again because it may implicate you in some way in a correlation that could be useful for their purposes. I just wanted to flag that. That was something that stuck with me.

A big question that also keeps coming to me over the course of this conversation is where this is all going. What do we expect to happen from here onward—I am not even going to talk about the long-term future because that is a big, black hole, but even in the near term? I would love both of you to offer some thoughts on this and if there are any recent developments that maybe give a sense of where things may be going.

ELENI MANIS: I will share a tiny bit of good news. I was very pleased to see last month that Amazon cut off police access to Ring doorbell footage. When police have access to footage of people’s neighborhoods and neighbors, taken from people’s homes, it extends the long arm of the police. That is the kind of fusion that we do not want to see—police having access to people living their daily lives.

Overall, my colleagues and I are concerned. In the past year and a half, bans on abortion and gender-affirming care have skyrocketed. That means that fused data can be searched to identify people who are seeking criminalized healthcare, helping their kids get criminalized healthcare, or googling abortion clinics. Like I said earlier, if you know who crossed state lines and who parked across the street from a certain clinic, you can put that data together and profile people who are seeking healthcare, helping their kids get healthcare, or providing healthcare. I worry about fused data being weaponized. I think it is very important that we start siloing data so that, for example, California’s license plate data does not make its way across state lines.

CHRIS GILLIARD: I think there is a bit of a tug of war right now. There are states that have some notable privacy legislation—California, Illinois—and I think there are lots of groups and individuals in communities who have strongly asserted that it should be up to them to dictate what constitutes safety in their neighborhoods or in their communities, but I think federally, aside from a few isolated legislators, we have not seen any movement that in any way is going to curtail or put limits on these systems.

There is a bit of good news and a bit of bad news. I am not particularly optimistic at this point because I do not think that, again other than a few outstanding legislators, there is the necessary will on the part of the federal government to do anything to slow this down.

ARTHUR HOLLAND MICHEL: Something that has not yet come up is that these tools do seem to operate in something of a legal “blind spot,” if you will. There are not specific rules for fusion technologies.

CHRIS GILLIARD: Absolutely. Again, I think the distinction between what constitutes public and private makes things difficult. As Eleni alluded to, the data that law enforcement may not be able to collect on their own—data they want on a segment of people or on individuals—they can often easily go and buy.

I may catch a little bit of heat for this. I think the Amazon thing that Eleni mentioned is important, but the opposite side of that coin is that because doorbell cameras have now become almost ubiquitous, and because of fusion, the police do not need to go to Amazon or Google to get the information; they can just have the doorbell footage sent directly to them. Again, this typically requires the consent of the person with the camera, but a lot of people have very much absorbed the narrative that surveillance equals safety, so it often does not take much to get people to consent.

ARTHUR HOLLAND MICHEL: Though not indeed the consent of the people who may find themselves on that person’s porch.

CHRIS GILLIARD: Precisely.

ARTHUR HOLLAND MICHEL: This may be a good moment to jump over to some of our questions. I am going to read a handful of them out, and our esteemed panelists can address multiple questions at one time.

Our first question was about the data point that Chris gave early on, about there being 135 or so of these right now and growing around the country. I can address that very efficiently: that is roughly the number of fusion tools that seem to be in use by police departments.

CHRIS GILLIARD: Correct.

ARTHUR HOLLAND MICHEL: Another question is one that often comes up, and rightly so, in events of this nature, which is about whether there are any tools or best practices that can help limit the data footprint that we all leave behind in modern life.

Another question is about the international side of this, whether this is something that is uniquely happening in the United States or is most advanced in the United States or whether these practices are very widespread.

I will just throw a couple more at you and again invite the audience to keep asking questions. Is it possible, an audience member asks, that we will reach a tipping point in terms of people’s tolerance? Does it feel like we could reach a point where people start pushing back as a result of the sorts of practices we are seeing here today?

Finally—and we will wrap up the questions here for this round—another question is premised on the notion that many criminals will purposefully go to other states after committing crimes, and asks whether there are any alternative suggestions for capturing multi-state felons or other dangerous persons who are willfully evading these practices, and whether siloing data in that way might help them do so.

I know that is a lot. I will hand it over to Eleni first, and then we will go to Chris.

ELENI MANIS: I will take the very first question because it makes me angry. Are there tools or practices that we can use to protect ourselves? Absolutely. If you search for cybersecurity training aimed at a group you are a part of—perhaps an over-policed group: you are a protestor, a member of a religious community, or someone seeking criminalized healthcare—you can find excellent resources online. But you should not have to. The only solutions to our privacy problems are at the policy level. We need, ideally, federal action on privacy. Yes, you can do something, not everything, to protect yourself, but it is wrong to ask individuals to change an ecosystem that is so thoroughly oriented toward surveillance.

CHRIS GILLIARD: There is a widespread movement to use these tools, and I think we often talk about the Chinese Social Credit System and things like that in ways that are unfortunate, even detrimental or discriminatory. Part of the reason I draw attention to that is that in many ways the economy of how we deal with data and surveillance here is comparable; it is just that it comes from disparate sources. There is a story out recently about the United Kingdom putting real-time facial recognition in the subways. All over the world there are examples of this. I have said a lot of things that are gloom and doom, but I also do think that there are tons of communities and activists who are pushing back on these tools.

I want to touch on the last question. I think it is important not to dismiss the potential for danger and harm that exists in our society. I am trying to answer this in a delicate fashion, but in a society that is supposed to prioritize freedom of movement, freedom of association, and civil rights—again, this is mentioned in the tool; I want to keep referencing that—it is important to strike some balance, because there is no such thing as perfect safety. If you live in a free society, that is something we mostly understand, but the false promise of a lot of these tools is that they will deliver perfect safety if we have perfect surveillance. I do not think that is the case.

As you mentioned, Arthur, the level of creep with a lot of these tools is immense. The idea that we would use these tools to stop terrorism, eliminate child sexual abuse material, and contain or arrest people who abduct children—almost no one would disagree with those uses. But when we are using them for all of these other purposes, along with widespread narratives about a rise in violent crime—often not true and often promoted by companies who have something to gain from pushing that false narrative—I think it is important to push back and think about the balance that we need to have if we want to live in a free society.

ELENI MANIS: To add to Chris’s point, many surveillance tools simply don’t work. Chicago just canceled its ShotSpotter contract, or will let it expire. The city spent tens of millions of dollars on a system that wastes officers’ time, that mistakes cars backfiring and fireworks for gunshots, and that brings armed police into neighborhoods and sometimes results in tragedy. So we can deprive police of expensive surveillance and data fusion tools without depriving them of good tools that can actually help promote public safety.

ARTHUR HOLLAND MICHEL: I want to go quickly to your responses to the question about pursuing violent criminals and layer onto it: I am certain you are not advocating that states not pursue violent criminals; what you are advocating is that one should not engage in these activities in the total absence of any guidelines, restrictions, and criteria—in the same way that we agree that people who engage in violent illegal activities should be tried for their actions, but under very strict rules of due process. Is that a fair characterization of your stance on a consideration like that?

ELENI MANIS: Public safety is not catching violent criminals; it is funding public schools, ensuring people have jobs, ensuring they have public transportation to get to those jobs, ensuring that we do not have food deserts, and taking care of health risks in underserved neighborhoods. Public safety does not mean funding and over-funding police. Like I said, we reject the framing.

CHRIS GILLIARD: I would mirror what Eleni said, but I mentioned this before: I come from the Detroit area, and I have seen many of the effects of some of these policies of underinvestment or abandoning of a lot of the programs that Eleni talked about. I have seen the effects of runaway surveillance and rampant over-policing. I have been victimized by it myself, so I do reject the binary that we can either have these tools to the degree that we have talked about them and that are covered in the tool or we can have a Mad Max society. I do not think that is the case.

Again, to go back to a point I made earlier, these tools are often marshaled against the most vulnerable. They are often used in ways that are not part of the promotional materials discussed when they are pitched. Finally, if these tools made society as safe as the claims suggest, then the United Kingdom, for instance, which has more surveillance and closed-circuit television than almost anywhere, would be one of the safest places on the planet. If they worked in that way, we would not constantly need more. The evidence that a lot of these tools work in the ways we are told is often scant, whether that is bodycams, automatic license plate readers, or doorbell cameras.

ARTHUR HOLLAND MICHEL: I have a question here that dovetails nicely with this discussion, which is whether the goal or solution to what we have been talking about, at a policy level, is abolition of these tools. Our esteemed audience member asks: “Does this Pandora go back into its box after police or federal agencies have had their hands on it?”

I might add to that: how much of this risk stems from the notion that there is an absence of rules? I want to get a sense of whether there is a feeling among my panelists as to whether there are policy instruments, perhaps short of abolition, that would address some of these concerns.

With that said, I do want to bring in an additional question that has come in. Similar to how there is no such thing as perfect safety, is there such a thing as the perfect technology, where its purpose is to solely serve the ethical needs of society, or does technology inherently involve violations of privacy? That is a lot to hand to you, but maybe we can start with Eleni to go to these two questions as we enter the homestretch of our program tonight.

ELENI MANIS: I am not sure I can answer those questions in theory, but I want to acknowledge [the questioner's] point that public safety does include law enforcement, sure, but law enforcement as we know it in the United States has a history of racist, biased policing and of over-targeting certain communities, specifically low-income communities, Black, Indigenous, and People of Color communities, LGBTQ people, Muslim Americans, and protestors. Given this history, tools that operate on historic police data perpetuate the over-policing of these communities.

It does not help that much surveillance is conducted in secret. We talked briefly about fusion centers, where local, state, and federal law enforcement agencies share data, set up ostensibly to counter terrorism after 9/11, at the same time that the Department of Homeland Security was set up. They were found to do little if anything to help counterterrorism efforts, but they did violate civil rights: They spied on protestors, they spied on Muslim Americans, and they helped local police forces patrol for minor crimes, leading to many, many arrests of people for offenses as small as riding a bicycle in an unpermitted way. Surveillance tools used in secret by law enforcement agencies known to be biased are not a recipe for respecting civil rights.

ARTHUR HOLLAND MICHEL: Before I hand it to you, Chris, I am going to throw in one additional question, and that is how data poisoning or obfuscation might affect these surveillance tools, specifically both deliberate and accidental poisoning or obfuscation of data, if you want to fold that into your comments.

CHRIS GILLIARD: I have long been an advocate of banning facial recognition, not only because of the possibility of it falsely incriminating people, but I think that at its root it erodes a foundation of how our society is supposed to operate. The foundation of society is supposed to be that if you are not doing anything “wrong”—and I know that is a loaded term—and you are not suspected of doing anything wrong, then you are allowed to freely move about society unbothered. That is how it is supposed to work.

Facial recognition erodes that, along with many of these other tools, but in some ways facial recognition uniquely so. As you pointed out, Arthur, in some cases these tools—and we have seen this with proprietary facial recognition tools—are on individual law enforcement officers’ devices and things like that. I think that is a detriment to society.

To the other two questions: I do think surveillance necessitates a balance. There is not a perfect technology that is going to solve all of these things. There needs to be some version of policy that respects individual rights, technology initiatives, and things like that, with the understanding that abolition of certain things should fall within the spectrum of how we deal with these issues and how we move toward a more equitable and just society.

Without hearing about the specific obfuscation and data-poisoning tools that the person who asked the question is referring to, I do not feel entirely comfortable answering that one, but, mirroring what Eleni has said, I am often a little skeptical of individual solutions to community or societal issues.

ARTHUR HOLLAND MICHEL: I have one last question for the two of you in our last couple of minutes. It actually touches on an audience question that just came in, which refers to the fact that there are certain laws that some or all of the practices we have spoken about today may be subject to. I wanted to note that question because my question was: What are some specific actions that can be taken? What are things that either we can do, or that we can ask of those who represent us, in a concrete, tangible way, to address what we have spoken about today?

I would also ask that you tack onto this response something that I always ask my panelists when we talk about topics like this, and it is the following: Today has been a very long and difficult day. A lot of us have been snowed on. We are now talking about admittedly some very difficult topics, topics that we are going to go to sleep thinking about. I just want you both to end by telling us one thing that you think should get us out of bed tomorrow morning, something that you think maybe is some cause for hope and optimism.

That being said, I will hand it over to Eleni, and we will then give the last word to Chris.

ELENI MANIS: The thing to get out of bed for is that groups like the Surveillance Technology Oversight Project, the Electronic Frontier Foundation, and others are doing this work, working to stop police abuse of surveillance tech. You are here, you are listening, and you can help amplify the message, and we are so grateful for that.

I lost track of the first half of the question.

ARTHUR HOLLAND MICHEL: Policy actions, things that we can do or our representatives can do.

ELENI MANIS: You asked about my work in the New York City Mayor’s Office. At the time I was there, the city created the Office of Information Privacy. The agency was supposed to regulate each city agency’s collection, retention, and sharing of data, and its disclosure of identifying information.

We have talked quite a bit about commercial surveillance data and how, unfortunately, it can replace much of the need for governmental data, but I think at the very least public agencies should do their best to limit their collection and sharing of data and to protect their residents.

I do not want to see student incident reports being sent to police officers, much less to immigration agencies. I don’t think license plate data should cross state lines given the current environment, where we have states that are criminalizing abortion and gender-affirming care. So I think there is quite a bit that states and municipalities can do to protect and shield their own residents and to create sanctuaries where you can safely get healthcare or attend school as an undocumented student.

CHRIS GILLIARD: Because of the overall data environment, enabled by tech companies and their abuses over the last 20 years, I think a lot of the laws that would prevent some of the things we are talking about, or at least make them more difficult, are not very meaningful.

Again, to speak to a point that was brought up a few times: for everyone from foreign governments to law enforcement to individual bad actors, the mechanisms for obtaining really intimate data about individuals or groups of people are wide open, in a way that is extremely detrimental and often allows people to circumvent laws that I think were meant to prevent exactly that. And there has not been meaningful federal privacy legislation passed in 20 years—or longer than that now, but I feel safe saying 20 years.

What encourages me: I think a lot about what Eleni said about individuals and organizations who are doing much of this work. I am also seeing a strong response to a lot of this from young folks. Having grown up in an environment where expectations about what is private, what is public, and what data means are radically different from what we experienced, and where ideas about telling your own story, creating your own narrative, and deciding who you want to be really matter, I think a lot of young people have developed mechanisms for pushing back on these things that can sometimes be very inspiring. I am a professor, so I have seen a lot of that firsthand.

ARTHUR HOLLAND MICHEL: I am going to add to your excellent suggestions about what should get us out of bed tomorrow a third, which is that, at least for my purposes and many others at Carnegie and those we are working with, we have to get up tomorrow morning because we have a meeting to continue this work on fusion surveillance and what should be done about it. I say that not only because I need to set an alarm but also because we are going to be continuing this work, we are going to have more outputs, and I would just want to extend an invitation to our community to get involved in this discussion, to reach out to us, and to give us your input.

I think there is a fundamental belief here among us all that the more people who are talking about it—yes, these issues are very, very complex and the ethics are difficult—and the more people who are involved in this discussion the better and the stronger the outcomes will be, so please do reach out. We are always open for business.

My final order of business is to thank the two of you, Eleni and Chris, for your time today and for your extremely valuable input and insight. I encourage everyone to continue to keep track of their work and their outputs because it is truly at the heart of so many of these issues.

That is all from me and from the Council. Thank you very much for joining us, and we will look forward to hearing from you and seeing you all again very, very soon.


Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.
