Facial Recognition, the Future of Privacy, & COVID-19, with Brenda Leong

Apr 14, 2020

In this wide-ranging talk, Future of Privacy Forum's Brenda Leong discusses the commercial uses of facial recognition technology, concerns about privacy and bias, how it's being utilized during the COVID-19 pandemic, and some tough questions about government surveillance. What's the future of facial recognition? How can we use this technology ethically? 

ALEX WOODSON: Welcome to Global Ethics Weekly. I'm Alex Woodson from Carnegie Council in New York City.

This week's podcast is with Brenda Leong, senior counsel and director of artificial intelligence and ethics at Future of Privacy Forum.

Brenda and I spoke about facial recognition technology, focusing on its commercial uses, privacy issues, and the COVID-19 pandemic. We also touched on bias in this technology, the role of the federal government, and some beneficial uses for facial recognition in the future.

For more on this subject, you can go to carnegiecouncil.org. You can check out last week’s podcast with Jameson Spivack from Georgetown Law's Center on Privacy & Technology, as well as a growing archive of podcasts on all aspects of artificial intelligence and big data.

For now, calling in from Northern Virginia, here’s my talk with Brenda Leong.

Brenda, thank you so much for taking this call. We scheduled this before the pandemic started, but it's good that we're getting to talk, so thanks for joining.

BRENDA LEONG: Yes. Thanks for having me. I appreciate the invite.

ALEX WOODSON: Of course.

I thought we would just start off with some definitions about facial recognition and some of the other terms that may come up during the talk, just to make sure that people who might not have been following this as closely as you are all on the same page as we go through this. Very basically, what is facial recognition technology, and what are some of the other terms that we should know?

BRENDA LEONG: I think that's a good place to start, if for no other reason than there are not always consistent definitions or firm lines around the different categories, so different people in different contexts use the terms differently.

In the way we talk about facial recognition at the Future of Privacy Forum (FPF), we have broken it down into the various ways that cameras interact with people. It doesn't always have to do with actually identifying an individual.

There is something called "facial detection," which is basically just determining if a human is present. There are a lot of applications for this. It's how your camera focuses and puts the little yellow square around the image of the person in your photo, and it's also how crowd-size counters and similar tools work.
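A minimal sketch of the "detection" level, using OpenCV's bundled Haar-cascade face detector; the image path is a placeholder, and this stands in for detection generally, not any particular camera's implementation:

```python
import cv2

# Load the frontal-face Haar cascade that ships with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("photo.jpg")  # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns bounding boxes (x, y, w, h): it reports that
# a face is present and where, but says nothing about whose face it is.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 255), 2)

print(f"{len(faces)} face(s) detected")
```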

There is "facial characterization," which is also sometimes called "emotion detection." There are other terms for that as well. That is one that is getting a lot of attention recently. It doesn't necessarily identify the specific person but starts to make a lot of assumptions about a person, both the external things—are they male or female, are they tall or short, old or young—and also are they happy or sad and other behavioral characteristics based on it.

What is really more accurately called "facial recognition" has two categories: There is verification and identification. Verification is where you are asserting that you are a specific person, and the system is verifying that. That's what you do when you use it on your smartphone. Some facilities that you might use facial recognition to enter are doing that. So you are actively engaging in that exchange.

The true facial recognition that people tend to go to in their minds from movies and science fiction and the CSI shows and things like that is where you're taking an unknown image and saying, "Can I figure out who this person is by matching them against some sort of available database?" That would be the identification level, where it is truly a search of an unknown image to figure out who that person is.
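To make the verification/identification split concrete, here is a minimal sketch; it assumes face images have already been turned into embedding vectors by some face-embedding model, and the function names and the 0.8 threshold are illustrative, not any vendor's actual API:

```python
from typing import Dict, Optional

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray,
           threshold: float = 0.8) -> bool:
    """1:1 verification: 'Am I who I claim to be?' (e.g., phone unlock)."""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe: np.ndarray, gallery: Dict[str, np.ndarray],
             threshold: float = 0.8) -> Optional[str]:
    """1:N identification: 'Who is this unknown face?' Compares the probe
    against every enrolled template and returns the best match above the
    threshold, or None if nobody clears it."""
    best_name, best_score = None, threshold
    for name, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

The structural difference matters: verification makes one comparison that the user initiated, while identification searches an entire gallery, so its chance of a false match grows with the size of the database.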

All of those have obviously different risks, different capabilities, and different use cases, and it's important to know what you're talking about when you're making assumptions or expressing concerns about some of the impacts of those.

ALEX WOODSON: Definitely. I know something that you focus on is how facial recognition technology is being used in commercial applications. What are some of the ways that it is being used now, and maybe in the near future what are some things that we can expect as well?

BRENDA LEONG: At FPF our focus is really on commercial work. We don't get as involved in law enforcement or national security or things like that, although obviously there are not always bright lines between all of those.

In commercial applications it started, obviously, as an outgrowth of security cameras or security uses. There is a lot of interest in physical security—controlling access to facilities, or identifying issues in crowds at big public venues, arenas, sports centers, and things like that. That's where people started to say: "There are already cameras. There is already a lot of video surveillance. Can we use facial recognition in a way to enhance our security and maybe target it more?" Stores might want to use it to track known shoplifters or other issues.

Because that capability was there, and from the desire to provide an enhanced customer experience, a lot of use cases were then added—or are being considered—that are not really about security but about the consumer experience. Those might include things like easy access to a limited venue, such as a big-box store like Costco or BJ's.

I'm not saying that they're doing this now, but that's the kind of environment where they might want to let you in just via facial recognition rather than having to pull out your card. It might be at a store where you have a frequent-shopper card: anybody can go in and shop there, but maybe they want to give you additional discounts or specials because you're on their VIP program or whatever it might be.

The hospitality industry is very interested in the capabilities of this: hotels, travel agencies, rental car companies, and the like that want to enhance the "frictionless" experience for people who might want to opt in and say: "Hey, I want to be able to walk up to or into the hotel, have the system scan my face and recognize me, and immediately send to my cellphone a quick-response code that gets me into my room, or some other easy check-in, or logs me into whatever other options I have asked for at the hotel, like my spa appointment or a dinner reservation."

And that all gets coordinated in a very seamless and passive way for me, just because I have opted in to have my face be the tracker for that. The same applies at conferences, and again to VIP experiences at sports stadiums and so forth, where via facial recognition you can maybe get in a different line, get in faster and easier, or have access to parts of the stadium, wherever the special viewing boxes are, and things like that.

All of those are private-use cases. They're not tied to security, and they're not tied to any kind of public law enforcement tracking sort of thing. They are literally trying to enhance features and services for the consumer experience.

ALEX WOODSON: A lot of these might be somewhat noncontroversial uses of facial recognition technology, pointing to more convenience for the user, but what are some of the concerns that you would have about these uses of the technology? What are some of the issues that arise that you might not think of right off the bat?

BRENDA LEONG: That's a great question. I don't think there's any use of facial recognition that doesn't have some level of controversy around it, just because there are so many people who are uncomfortable being subject to this type of technology, sometimes without their awareness or understanding. Unlike most biometrics, where you have to present your eye for the iris scan or your finger for the fingerprint, you might walk through a facial recognition scan and never even know it. It's an extremely background, passive sort of system, and that gives a lot of people a lot of concerns, and I think rightly so.

You will notice one of the terms that I used a lot in describing those was "people who have opted in" to have that feature as one of the ways of enabling the services that they're taking advantage of. That's very much our bottom line on that. That has to be an express, opt-in choice on the part of the consumer, who says: "I'm aware that this is a service that's offered. I want to do it. I present myself to have my face scanned and enrolled in a database, and I'm aware that that's how these services are being delivered to me."

That's great for the people who want that, and it's great for the industry applications it can facilitate, but it also leaves open the entire scope of choice for people who want no part of that at all: people who either don't want to participate in that sort of tracking, or at least don't want it done using their face as the identifier. So one of the key things is that it needs to be an expressly affirmative opt-in for those sorts of systems.

There is no basis for claiming any of those systems are a necessity. The government, at border control or other places, can say: "This is going to be the default. You have to opt out of it. We're going to do this as the standard because of these overriding security reasons." But a consumer experience is never going to have that sort of justification, so it needs to be really clear that this is happening and that it's what the consumer wants to engage with. Otherwise, it's a bit of a privacy violation: even if it's not against the law, it's not in accordance with good privacy principles.

ALEX WOODSON: On the Future of Privacy Forum website you have a document about privacy principles for facial recognition technology in commercial applications. Some of those are consent and transparency, a few of the things that you have just mentioned.

Do you find that a lot of companies, a lot of organizations that use facial recognition technology are aware of these principles, or do you have to remind them of them? I'm sure they tie in with other applications that the companies can use. What has been the level of understanding that these are serious issues among big commercial enterprises?

BRENDA LEONG: I think at this day and time certainly most large companies of any kind, whether they're technology companies or not, are aware of consumer privacy issues. We're just at the point where privacy—data tracking, personal consumer information—is a big enough issue that you can't pick up a newspaper or listen to anything without encountering it; we're doing a podcast on it right now. It's everywhere, and these companies are well aware of that. They understand the responsibility they have to their customers.

Just like they have a responsibility in the security realm to keep that data secure using appropriate and safe practices, they also have the requirement to have good privacy practices. I don't think you would find many companies that are not aware of it, and most have somebody—sometimes multiple people—whose job is specifically to manage it. It could be operating out of the general counsel's office from the legal compliance aspect, but it also usually overflows into areas like design, so if they're going to offer a new product or service or feature of some kind, they're going to consider privacy: "Are we going to have to collect more data to do this? In the case of facial recognition, we're going to have to create a database of people's facial images. What system are we going to use? How are we going to protect the data that more or less needs to be part of that process?"

We have relationships with many companies. We're supported by a combination of commercial industry support, grants, and foundation funding. Through the relationships that we have with those companies, we have the opportunity to see and hear what their real-life experiences and challenges are, and then hopefully to give feedback and policy suggestions on what that should be.

Those privacy principles that you mentioned were in fact developed in a not-quite-year-long program where we engaged the whole spectrum of stakeholders in terms of the companies involved: the companies that actually design and develop biometric and facial recognition systems; the companies that sell, market, or maintain them, potentially as a service; and then the businesses that have the consumer-facing aspects of those, whether those are retailers, online companies that are using facial recognition for photo tagging, or banks where you might now be able to do financial transactions on your phone with your face as your verifier. We considered all of those applications generally, engaged with those companies to get their input and thoughts, and that's how we ended up with the principles that we did. It was a combination of responding to what their particular use cases were and what our policy recommendations were, and achieving consensus on how that should look for a company that was going to use these systems.

ALEX WOODSON: One issue that I have seen come up a lot—maybe not specifically related to what we have just been talking about, but obviously there are some ways that it can relate to that as well—is bias in facial recognition technology, how lighter-skinned faces are identified more easily than darker-skinned faces, and sometimes women are identified as men.

How much of that is an issue as you see it right now? I would imagine that the technology just keeps getting better and better. What are your thoughts on that issue, maybe as it relates to these commercial applications or any other applications of facial recognition technology?

BRENDA LEONG: That's a really complex area, and we could spend the whole day talking about just that, but to try to summarize some of the key points briefly: First, yes, the best systems are in fact getting better and better. The National Institute of Standards and Technology (NIST) in the Department of Commerce does evaluations on a voluntary basis; a company has to submit its facial recognition system to NIST for testing. Many do, certainly all the leading suppliers of these systems, especially those supplying the federal government, because, as I understand it, a NIST evaluation is required for them: NEC, Idemia, and some of the other leading manufacturers. Their systems are remarkably accurate, at the 99.4 or 99.5 percent level, and that's with essentially no demographic variation. That's not where they were a couple of years ago. The conversation about bias has been going on for a while. The concern is absolutely legitimate, and that's where some problems started, but the best systems are now pretty good.

But that is also not to say the problem is solved: there are 100+ companies in the most recent NIST testing, and NIST put out a report in December that specifically focused on demographic bias in some of these systems. Some of them are very bad and very inaccurate across variations in race, gender, region, and so on. It's very important that anybody who is going to use a system like this pick one that's good and then evaluate it for their own use case: Is it going to match their population?
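A back-of-the-envelope illustration of why that per-use-case evaluation matters: in 1:N identification, even a small false-match rate compounds across the whole gallery. The numbers below are made up for illustration; they are not NIST's published figures:

```python
# Hypothetical numbers, for illustration only (not NIST results).
false_match_rate = 0.001   # 0.1% chance a random non-match clears the threshold
gallery_size = 1_000_000   # enrolled faces searched for each probe

# Expected false matches when searching for someone who isn't enrolled:
expected_false_matches = false_match_rate * gallery_size
print(expected_false_matches)  # 1000.0 -- a tiny rate still swamps a big gallery
```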

The World Economic Forum put out, within the last six months or so, a set of underlying principles for the use of facial recognition, and one of the things was—because they're obviously looking at it from a global standpoint—no matter how well trained or how well tested the system is, is it going to work on your expected population? So if you are a business operating in Asia, versus one operating in Northern Europe, versus one operating somewhere in Africa, you're going to have very different demographic populations, and you need to know upfront if this system is going to work well in the context in which you're going to use it. That's an underlying principle of responsibility for the companies that are going to implement these systems, to make sure that it's applicable.

Without getting into the technical issues of bias, how it happens, whether there are ways to train it out, or what else it might mean, I think that is probably the short answer.

ALEX WOODSON: Moving on to the pandemic, what have you been seeing in terms of facial recognition technology and the pandemic? How is it being used? How are people thinking about using it? What have you seen in the last month?

BRENDA LEONG: Yeah, obviously, everything in all of our lives right now seems to circle around the issue of the COVID virus, facial recognition no less so.

One of the key strategies for fighting or managing the spread of the virus is tracking: trying to figure out who is infected and where they have been, getting them tested, and having people self-isolate if they've been exposed, that whole public health design. So there are all kinds of strategies being used for figuring out the best way to track people.

There are location-tracking apps, many of them, and different countries are doing it in different ways. Some of those involve various levels of biometrics, including facial recognition. Some countries are experimenting with ways to track people using facial recognition in addition to Global Positioning System tracking or other methods. I don't know that that is any more or less effective.

There is certainly no consensus right now on one app or one way to do that that has been accepted internationally or globally. Singapore has had a lot of success with the app that it used and has become a bit of a model. Other places have tried to implement the same or similar models, which restrict the sharing of information and keep the data local to your phone, so it's a very privacy-respecting system.
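As a rough sketch of what "keeps it local to your phone" can mean, here is a toy model of a decentralized proximity-tracing design: phones broadcast rotating random tokens over Bluetooth, log the tokens they hear locally, and match against tokens published by diagnosed users on the device itself. This is a generic illustration in the spirit of decentralized schemes, not Singapore's actual protocol:

```python
import secrets
import time

class TracingDevice:
    """Toy model of one phone in a decentralized proximity-tracing scheme."""

    def __init__(self) -> None:
        self.my_tokens = []   # random tokens this phone has broadcast
        self.heard = []       # (token, timestamp) pairs seen nearby; stays local

    def broadcast_token(self) -> str:
        # Rotate to a fresh random token so observers can't track the phone.
        token = secrets.token_hex(16)
        self.my_tokens.append(token)
        return token

    def observe(self, token: str) -> None:
        # Log nearby broadcasts locally; nothing leaves the phone here.
        self.heard.append((token, time.time()))

    def check_exposure(self, published_infected_tokens: set) -> bool:
        # A diagnosed user consents to publish only their own broadcast
        # tokens; every other phone does the matching on-device.
        return any(tok in published_infected_tokens for tok, _ in self.heard)

# Two phones pass each other:
alice, bob = TracingDevice(), TracingDevice()
bob.observe(alice.broadcast_token())

# Alice is diagnosed and publishes her tokens; Bob checks locally.
print(bob.check_exposure(set(alice.my_tokens)))  # True
```

The privacy property comes from the direction of data flow: encounter logs never leave the device, and only a diagnosed user's own broadcast tokens are ever published.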

The other issue on facial recognition in the pandemic is less about identification and more about diagnosis. I know there are apps and systems seeking to do things like measure temperature: Is there a way to identify someone with a fever by a scan of their face, or by other medical scans of some kind that can be done individually and independently, without having to see a provider or go to a medical facility? I don't know that any of those are super-reliable at this point, in the sense of being accurate at a sufficient level, but I know there is a lot of work being done on that.

As I referred to at the beginning, that's less of an identification facial recognition aspect and more of a characterization thing. There are huge applications of all of these kinds of things being studied or researched in the health care sphere generally, even apart from the pandemic, of ways to do analysis of people by using various kinds of scans. Many of those coexist with a recognition-level system, because you want to know who your patient is as well as the potential symptoms or conditions they might be experiencing.

So that all ties together, but it can be done without necessarily being an identifying thing. I know there is work being done to see if there are ways to evaluate people this way; fevers are one of the big ones, but if it's determined that there are other symptoms that could be measured or identified by some sort of camera-based scan, that's something people are going to pursue.

ALEX WOODSON: A very basic question: You sent me an article about mask-wearing and facial identification from the New York Daily News. Does wearing a mask mean that you can't be recognized using these systems? Are people experimenting with different ways to get around that? What's the status of that?

BRENDA LEONG: The answer to both of those questions would be yes. Right now wearing a mask—I don't know what the actual percentage is, but a really large percentage of the time, probably 90-plus percent—is going to foil whatever facial recognition system you're a part of. You're most likely not going to be able to get into your phone, you're not going to be able to use it for some of the things you might have normally used it for, and you're not going to be picked up and tracked by any sort of public surveillance system if you have a mask on.

But there is a lot of work being done to counter that and to make it accurate enough or sensitive enough that, if the upper part of your face above the mask or your ears or other parts of your head are available, there is enough specificity that they can continue to track you.

That article—which was written by folks who have very strong civil rights concerns about public surveillance generally, and about facial recognition as one of the tools for that—was trying to make this point: yes, because so many more of us are wearing masks right now, we are realizing that masks do in fact foil existing systems.

But for those of us who are really worried about this as a civil rights issue or a government overreach issue, let's not take comfort from that. It's not going to last forever. The systems are going to get better. They're going to be able to identify people even if they are wearing a mask.

There are, in fact, places that already have laws in place that say, "You cannot wear masks or other accessory-type items for the purpose of concealing your identity." A lot of those have nothing to do with facial recognition or pandemic-type issues. Some of them are religious discrimination laws. There is a whole separate conversation to be had there, but it's a fact that there are laws on the books in certain places that prevent that anyway.

So, yes, we happen to be wearing masks right now, and that happens to be foiling facial recognition. But the point is: if you're concerned about some of the broader aspects of facial recognition in public places, don't take a lot of comfort from the fact that masks are stopping it right now, because, as I think the article calls it, that's just a "speed bump" for the technology to figure out a way around.

But that does open up the other issue of why some people are very concerned about facial recognition, and why certain cities and municipalities, and maybe even some state-level legislators, are looking at bans on its use either by public agencies, that is, by government agencies, or in public places, because of concerns about Orwellian surveillance and about individual freedoms, civil liberties, and things like that.

I am not dismissing those. I hope my tone of voice did not make it sound like I was saying that lightly. Those are very legitimate concerns, and the fact that there are whole cities that are doing that makes it clear that this is a really important fear for a lot of people about the overreach of government.

ALEX WOODSON: Do you find yourself thinking that we should maybe loosen some of the restrictions on government surveillance and privacy at a time like this, or do you think it is just as important to really focus on that issue and keep people's private lives private and not allow government in our homes?

BRENDA LEONG: That's a really tricky question. This is my personal opinion at this point, me speaking for myself only. I think emergency situations or crisis-level situations do lend themselves to emergency-based conversations and potentially solutions. That is not to say we should just say: "Oh, you know what? Privacy doesn't matter right now. Let's just get this done and then we'll figure it out later." I think that is very definitely the wrong approach to take.

To say that we will encourage people to download an app because we really need contact tracing, or that we will encourage people to voluntarily participate and contribute their location data at a level they would not normally do, and that we would not normally need, because of this situation, maybe that's okay. Maybe that's fair, because at the end of it you can delete the app and take it down. The government or anybody—the app provider, the device provider—may still have a pretty detailed set of location-tracking data on a lot of people, but maybe that's a reasonable tradeoff at that point in order to fight the situation. At least it's a reasonable discussion to have, I think.

But to say that we just ad hoc lift privacy restrictions—take the Health Insurance Portability and Accountability Act (HIPAA), the law that handles the privacy of personal medical records and health data—and set them aside for the duration of this, I think that's a bad idea. Any change needs to be very specific to the need and the value gained, and it needs some built-in ability to walk away at the end, to pull back and put those protections back in place to whatever extent they were lifted or loosened in the moment.

So it's hard. I get that it's really hard, and there are people—especially if they're sick, in high-risk categories, or have family and other loved ones in high-risk categories—who really want to make sure that we're doing all the things we can to prevent the spread of the virus. I think that's really important, but it can't be just a wholesale "do whatever it takes," lock everybody down in every possible way.

We've already made risk-based decisions about which businesses are considered essential, minimizing the risks for as many people as possible, but we still have people working in restaurants, delivering things, and doing various jobs where we say, "It's more important to have this job keep getting done than it is to lock every single person down to the extent possible." We make those kinds of compromises all the time. I think the same thing has to be done when you're talking about collecting, tracking, or using personal data.

ALEX WOODSON: I want to finish off with two questions, looking a bit into the future.

In January you testified before Congress about facial recognition technology, so obviously you're very involved with this in terms of speaking to the federal government. This issue is going to continue on during the pandemic and after the pandemic. Do you see the federal government's response to facial recognition technology changing because of the pandemic, or do you think it is going to continue on the trajectory that it has been on?

BRENDA LEONG: I don't think the pandemic is going to change the direction or the level of interest much. The federal government's interest in privacy generally, including personal data and facial recognition issues, has been growing steadily.

The House Oversight Committee hearing that I spoke at in January was in fact their third; they started doing hearings on this last summer and have held several since. There have been a number of proposed bills, none of which have gone very far down the path, but the fact that bills about facial recognition technology and how to regulate it are getting discussed on both the House and Senate sides speaks to the attention and interest in this. Basically I don't see that changing. I see that momentum continuing.

At the moment, a lot of things have been put on hold because whatever political energy there is has been redirected to the direct needs of the pandemic or other very high-priority issues for the government, but I don't see those things going away. I think we are going to continue to see general privacy laws proposed.

The United States does not have a general privacy law at this point, like many countries do. Ours are very sector-specific. We have student privacy laws, health privacy laws, and financial data privacy laws, but we don't have a general broad privacy law at the federal level. I think we're going to continue to see discussions and proposals of those kinds of laws, and we're going to continue to see laws that are specific.

There have been proposed bills about systems based on artificial intelligence (AI), of which facial recognition is of course one, and about whether they should be regulated specifically or separately from general privacy law. We have seen laws proposed on facial recognition and other biometrics. I think we're going to continue to see that.

But what's going to pass and when? I have no special knowledge or ability to project that, but I think the interest is going to continue and the attention is going to continue. I don't want to say it's going to speed up, but it's certainly not going to go away, and it already had quite a bit of attention to begin with. I think we will see that continue.

ALEX WOODSON: Maybe this is me being a little hopeful, but it seems like this is a less partisan issue than a lot of other issues in politics these days. Is that correct?

BRENDA LEONG: It is. I think a large number of the issues around personal data, personal information, and privacy generally speak to the entire political spectrum. I think we saw that in some of those hearings. People in both parties, sometimes at the more extreme positions within both parties, share many concerns, and for many of the same reasons. Sometimes they have a different underlying perspective on why it matters, or the particular fear they have might be different, but the point is they do share the concern, and they see value in some sort of regulation in a way that probably has enough overlap that they could reach agreement.

So, yes, I think that is promising, and I think the parties themselves have recognized that and made that known. Hopefully that makes it something that there can be some movement on. It's an election year. There's a pandemic. Who knows what all the political energies and factors are going to be at this point, but maybe after the election or in the coming year or so, it's something that could get some traction across both sides.

ALEX WOODSON: Final question: When I talk about AI, facial recognition technology, surveillance, and all those types of things, obviously a lot of times it goes in a pessimistic, dark direction. To end on a somewhat hopeful note, if you could think forward optimistically—all the privacy concerns are taken care of; the bias is taken out of it—what are some of the most beneficial uses that you see in the near future, or maybe the far future, for facial recognition technology?

BRENDA LEONG: Those are a lot of big assumptions, sending ourselves forward into that semi-utopian state of really excellent technology without some of the challenges.

Yes, I think there are some opportunities for really useful cases. A lot of it, as I mentioned before, is in the health sphere. There are a lot of ways that scans potentially will provide assistance in diagnosing and monitoring other factors about patient care and treatment that may be able to really advance medical care, make it more personalized, more effective, or more accessible, where you don't have to go to a facility or something like that.

In terms of digital identity—which is a whole separate conversation and more of a global conversation than just the United States—the whole concept of being able to validate and prove identity is very tied to biometrics generally, facial recognition being one of the primary ways to do that. There is a lot of work being done around the world on ways to help people establish identity based on a biometric in places that don't have good public records. Something like 40 percent of the world doesn't have proof of birth, like a birth certificate or some other legal document. I think we are going to see a lot of advances in the ability of people to establish their identity, and thus to receive goods and services and to participate in education or work programs, because of biometrics being used in that way.

I could give other examples too, but yes, there are lots of ways that biometrics can facilitate good things and useful systems. Hopefully we will be able to see more of those, with the technological challenges like bias addressed, and with the social implementation aspects, like surveillance, overreach, or abusive tracking of minority populations, overcome by legal protections or by implementations that allow for private use without running some of those same risks.

ALEX WOODSON: Great. Thank you so much, Brenda.

BRENDA LEONG: Thank you. I really appreciate the chance to talk about it today.
