Trustworthy Tech Development, with Julie Dawson

Nov 2, 2023 51 min listen

In this episode, host Hilary Sutcliffe explores . . . trustworthy tech development from another angle, investigating not just fresh thinking but fresh doing. As part of her work on trust and technology governance, she seeks to understand the processes of organizations that are taking trust and responsibility seriously from the start, to find out what they do and how they do it.

Sutcliffe explores the practicalities of how a company can provide evidence of trustworthiness with Julie Dawson, the chief policy and regulatory officer of global digital identity company Yoti.

HILARY SUTCLIFFE: Hello and welcome to From Another Angle, a Carnegie Council podcast. I am Hilary Sutcliffe, and I am on the board of the Carnegie Council Artificial Intelligence & Equality Initiative. In this series I get to talk to some of today’s most innovative thinkers, who take familiar concepts like democracy, human nature, regulation, or even the way we think about ourselves and show them to us from a quite different angle. What really excites me about these conversations is the way they challenge our fundamental assumptions. Certainly for me—and I hope for you too—their fresh thinking makes me see the world in a new way and opens up a raft of possibilities and ways of looking at the future.

Today we are looking not just at fresh thinking but at fresh doing. In my research on trust, which began in earnest in 2017, I was particularly inspired by the simple advice of trust expert and philosopher Baroness Onora O’Neill, who explained why the question of how to restore trust is on everyone’s lips. The answer is pretty obvious: First, be trustworthy, and then provide good evidence that you are trustworthy.

We hear a lot about trustworthy technologies and trustworthy companies in the tech space, but how to be trustworthy was what I wanted to know. Trust research is a blizzard of conflicting concepts and ideas. It had me laying my head on my desk and crying with confusion at one point. But gradually, across an array of different areas of psychology, behavioral science, political science, and science and technology studies, I saw this emerging picture of what I am calling “drivers of trust” or “signals of trustworthiness” that were common to all of these areas.

I have distilled these, almost accidentally, to the magic seven, and I want to use them as the basis of our conversation today. These drivers of trust are: intent that is not totally self-serving, integrity, openness, inclusion, fairness, respect, and of course competence. If you aspire to all of these but are not competent in delivering them, then trust is also lost.

These are things we know. They are issues that we hear we need to do all the time, so as part of my work on trust and technology governance I have been looking at the processes of those organizations that are taking responsibility and trust seriously from the start and trying to find out what they do and how they do it.

Today we are looking at trust and tech development from another angle, and I am delighted to welcome Julie Dawson, who is chief policy and regulatory officer of global digital identity company Yoti.

Julie, you are most welcome. Thank you very much for joining us.

JULIE DAWSON: Thank you, Hilary. It is a real pleasure to be here.

HILARY SUTCLIFFE: Let’s start with, what is digital identity, what is the company for, and what problems is it supposed to be solving?

JULIE DAWSON: It is all about how people prove who they are or how old they are. That is tricky for many people around the world who, for instance, do not have an identity document. We wanted to look at how people could do this simply and safely, and over time that has changed.

If you think back a few hundred years, you knew everybody in your local village, so you knew that this was Hilary and this was Julie. Now that we are working globally it is really, really important, and we all read about the fraud hacks around the world; in Australia just a few months ago half of the population’s identity details were hacked. Governments around the world are looking at how you can do this, and we as a company wanted to find out how we could do it for people online but also in person, how we could help people with documents and people without documents, and how you could let people share less rather than more and obviously do it safely.

That is it in a nutshell: proving who you are or how old you are, either face to face, in person, or online, and trying to do it in a way that is safe and understandable, using the technologies that exist today (most of us have a smartphone), and letting you set it up once and reuse it many times. That is the second generation of digital identity. The first generation was uploading a document and just using it on a one-off basis. Those are the two ways that people prove identity.

If you look across Europe, soon there will be reusable digital identity “wallets,” as they are called, in pretty much every country, and lots of parts of the world are building what they call “trust frameworks” around digital identity, including the United Kingdom, Singapore, Canada, Australia, New Zealand, and the whole of Europe. We are looking at how people in all of these different areas, in many different walks of life and many different sectors, prove their age, their identity, or things about them.

This was something we saw during COVID-19. If you wanted to travel, you had to prove that you had a certain vaccination or that you did not actually have COVID-19 at that time. It could be for employment that you need to prove that you have a clean criminal records check or certain qualifications. That is it in a nutshell.

HILARY SUTCLIFFE: I do remember frantically trying to prove at Brussels Airport that I had had my vaccines and was okay. I would have loved to have had the Yoti digital identity and had it as part of that wallet.

That is like an Apple wallet—other phones are available—so you have all sorts of different things on your phone and in your wallet?

JULIE DAWSON: Yes. There are different ways it can happen, but I think one of the key things for the individual with a reusable wallet is that you set it up once and use it many times, yet Yoti does not know that you are using it in the morning to share that you are over 18 on a dating site or in the evening in your supermarket to get some wine. Then you might be logging in to pay your taxes or a parking fine.

The key thing I think for the individual is that that is straightforward but that there is no surveillance. Yoti does not know what you are doing with each of those different parties, so the business model is that it is free for the individual, and it is those other different organizations that are paying for a check. You might have come across that in the past with things like anti-money laundering or know-your-customer-type checks.

If you think about it in that way, it is a good, simple analogy. You might at the moment in your wallet have all sorts of different things—your driving license, your passport—but then add to them in the future maybe your different qualifications, your memberships, and other things about you. I might want to share my peanut allergy with an airline, or I might want to share the fact that I am a scout leader getting into a scout jamboree. There could be many different use cases where I want to selectively disclose different parts of my identity to different people and to do that safely without filling in long forms and doing it in such a way that is private to me.
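The selective disclosure Dawson describes, sharing a derived claim such as "over 18" rather than the underlying document, maps onto a simple data-structure idea. Below is a minimal sketch in Python; all class, field, and method names are hypothetical illustrations, not Yoti's actual API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class VerifiedAttribute:
    """An attribute vouched for by the identity provider at setup time."""
    name: str
    value: str

class Wallet:
    """Holds verified attributes and discloses only what is asked for."""
    def __init__(self, attributes):
        self._attrs = {a.name: a for a in attributes}

    def share(self, requested):
        # Return only the requested attributes, nothing else.
        return {n: self._attrs[n].value for n in requested if n in self._attrs}

    def share_over_18(self, today):
        # Derive a yes/no claim without revealing the date of birth itself.
        dob = date.fromisoformat(self._attrs["date_of_birth"].value)
        age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
        return {"over_18": age >= 18}

wallet = Wallet([VerifiedAttribute("full_name", "Jane Doe"),
                 VerifiedAttribute("date_of_birth", "1990-06-15")])
print(wallet.share(["full_name"]))         # a login flow sees only the name
print(wallet.share_over_18(date.today()))  # a wine seller sees only {'over_18': True}
```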

We have to obviously be mindful that not everybody will have all of those documents and not everyone will have the latest and greatest device, so one of the big challenges in this world is, how do you look at that streamlining but also how do you look at making that inclusive, straightforward, and understandable for the vast majority of citizens?

HILARY SUTCLIFFE: Let’s go back to the trust drivers, starting with intent. When I looked at distrust of companies, it often arises when we feel—often rightly—that they are all about making money at the expense of their customers or society. So often companies start from the perspective of technological possibility: “I have got this tech, like a hammer looking for a nail. What can I point it at? Ooh, now I can point it at that.”

You started with the problem in mind in quite an intriguing way and then it grew through your engagement with stakeholders. Tell us a little bit about that.

JULIE DAWSON: I think there were two little genesis points way back. One of our founders ran obstacle races, where you would have hundreds of people lining up in a muddy field, the A–Js here, the K–Zs there, having to prove their identity and sign a health waiver in case they had a heart attack going around the race and hope they get their document back a few hours later. The other founder was doing onboarding for gaming companies and saw that it was actually hard for people without documents, those “thin-file people,” and it was also easy for bad actors to impersonate, and it was a pain in the neck for both sides, the company and the individual.

So they started to look at: “Well, how do we solve this? In person, how do people prove who they are, and how do we also help the issues online?” We started by speaking with lots of people, and we found out all sorts of different things where people were having issues. It could be somebody saying: “I am a young person. I am going to visit a parent in prison. I do not have any ID.” It could be somebody saying: “I don’t want to send my 15-year-old to the cinema with their passport. We are going on holiday in three weeks. Why is the cinema requiring a passport? He or she does not have anything else.”

We saw that there were a million driving licenses lost and stolen in the United Kingdom each year and 400,000 passports lost out and about, be it at Glastonbury or on nights out. We saw that there was increasing verbal, physical, and racial abuse directed at shop staff at the point of proving age. Those were just some of the different vignettes we got from people.

When we boiled it all down, we saw that this whole area of proving who you are, and of someone else trusting that you are who you say you are or as old as you say you are, was quite broken. That was the genesis of what we looked at: How could you make that simpler? How could you make it something that someone would trust?

The more we looked at it, the more we saw that it is obviously a delicate and sensitive area: not everyone will have the document, not everyone will have the right device, there will be people with different disabilities, and it has to be something that is inclusive of many different ages, genders, disabilities, and nationalities. It is not an easy gig to get started with.

HILARY SUTCLIFFE: I bet it wasn’t. That is fascinating.

From this approach we can see that inclusion, which is one of the most important trust drivers, is part of the way the company started, but also it is something that you have taken on an ongoing basis. Is that correct?

JULIE DAWSON: I would say by no means is this perfect. Like any company in the tech area we are striving to have a better balance of people with different disabilities, genders, nationalities, and all the different elements, because if you are serving people globally—we have 13 million people who have set up our reusable digital identity—you have to be reflective and thoughtful about the range of people who will use your tech. You are never finished on that journey. You are continually looking at what more you can do both internally within your company but also looking at how can you make the services more inclusive.

For instance, one of the things we looked at was that some people are not comfortable using a document-based approach to prove age, and some people do not even have a document to do that. They might not have it with them, or someone might have a controlling partner; for a whole range of reasons people might not be able to prove just their age. That was when we started to look at other approaches, including an artificial intelligence (AI) approach for proving age alone.

We also looked at the cost of root identity documents. In the United Kingdom, for instance, but also in lots of other countries, it is quite expensive to buy an identity document, so in the United Kingdom we have looked at lower-cost options. We are also continually looking at the whole range of disabilities: making what is actually quite a delicate process from the security perspective something that people with a range of disabilities can access, with or without support, is another long journey, working to what are called the Web Content Accessibility Guidelines.

It is an unfinished symphony; we keep at it. I would say by no means are we perfect, but we definitely keep on trying to improve that inclusion and diversity.

HILARY SUTCLIFFE: What I liked about Yoti as a case study when I looked at this, and partly why I invited you on, is that I did some work recently on stakeholder engagement, and starting with those most impacted by the problem is a great place to begin, whereas usually you start with the most influential or the noisiest and do not bother to ask those questions at all. I liked that in lots of different areas you have been starting with who is most impacted by the problem, who would be most impacted by the solution, and who the solution might not be accessible to and what to do about that. I was quite admiring of that.

I saw also, and this was part of your seven company principles, “Always act in the interests of our users.” I am actually quite skeptical of values. I read a great study from the Massachusetts Institute of Technology which found that companies’ actions and the values they espoused were quite often inversely correlated, according to their employees; so for their biggest value—“We are” whatever—their employees said you actively do not do that.

This is sounding very impressive already, but it is about walking your talk. Could you talk us through some of the processes you use, ones that could be relevant for all sorts of companies, not just those in this quite complex sector, to guard against being accused of trustwashing, ethics washing, and those sorts of things? And how do you show the evidence of your trustworthiness in this regard?

JULIE DAWSON: You are right, Hilary. It is not straightforward, and every day it is something that in my role I keep looking at what more can we do and how can we ensure that new people in the company understand this commitment that we have made.

I think one of the things we did from pretty early on was to get a group of people, our external advisors, who had even more expertise than anyone internally in areas like human rights, consumer rights, accessibility, and online harms, and every quarter we have to go to them. They scrutinize all of the internal communications (comms) within the company each week, and we have to explain what new things we are doing and why, and how things are progressing. That is quite full-on.

A lot of people said to us, “You are mad to expose yourselves like that.” The terms of reference are published openly, the minutes are published openly, and each of these people is an expert in their own field; their track record in human rights or consumer rights obviously matters more to them than their role at Yoti. The balance is such that that is where their integrity lies, so they will point out if they do not think we are doing enough on the human rights or the consumer rights front.

They are there to challenge us, and we have built that in as a mechanism, but we have also tried to build an internal group of people from all different teams in the company to be extra eyes and ears. I cannot be everywhere; my regulatory and policy team does not sit in every other team, so what we need, in a way, is to build up that thinking, the conscience element, and the questioning: “What would you not be comfortable telling your mum, your gran, your dad, or your child about? Is there anything in your work today that you are not comfortable about?” People should feel confident that they can tell the internal ethics group about it, and that group can pass things on to the external group, a sort of pre-whistleblowing.

Everybody has seen the Frances Haugen story in the last two years. What most tech companies would rather do is preempt that, get things out in the open, and thrash them out before you get to the whistleblowing stage. In a way that is one of the things we have tried to do.

I would say the second one is probably the B Corp certification. I don’t know if many listeners will have heard of B Corps, but through B Corp you have a couple of hundred metrics on which you are reviewed, across all sorts of different areas—effectively how you treat your staff, how you treat your suppliers, your customers, your governance, environment, et cetera—and you have to literally show evidence on each of those over 200 sections.

I can share the Excel with you later, Hilary. Have a look, because it is not something that you can just skim over. It takes a lot of work to go through, be it your male/female pay differential or how you say goodbye to a supplier where it has not worked out: What are the different things, and what is the evidence you show for each? Doing B Corp quite early on, before we even had any customers, forced us to be more thoughtful across a much broader set of areas. It is more of a movement than just a box-ticking exercise.

I would say inviting scrutiny is something we have tried to do, and we have done that also through roundtables, bringing in regulators, nongovernmental organizations (NGOs), and different bodies around one of our key AI products on the age-estimation front. That was something one of our guardians suggested we do, and we did it even before the product was brought to market. It teased out what the intended and unintended consequences, both positive and negative, of deploying the tech could be.

A lot of the things we see now, a few years later, in the AI Act and in the AI matrices that companies are preparing were things that that group forced us to do quite early on.

We keep at it. It is not easy as a small and scaling company. You still have all of the challenges: You have to get to profitability, you are still trying to balance the books, pay your staff, and do all those other things, but at the same time you have this long-term, ongoing investment in trying to do it in a transparent way, a way that stands up to scrutiny.

HILARY SUTCLIFFE: That is great. What I love about this too is that you have the other trust drivers in there: accountability and integrity, inclusion, and of course openness. There you have the full suite of trust drivers in some of the steps that you are taking.

Back to my learning on trust, and also a very large number of years on corporate responsibility: this idea of seeking out challenge, seeking out negative input, and internal psychological safety are probably the three central pillars of trustworthiness but also the three main causes of distrust. I speak to lots of people working internally in tech ethics, across different technologies, who actually say: “We have not got psychological safety at all. They just do not want to hear. They say they want to hear, we tell them, and nothing happens.”

I hear that—I am doing something at the moment and I cannot name the type of organization—the whole system, academia as well as business, does not value psychological safety and does not value negative opinions. There is a great project from the European Environment Agency called “Late Lessons from Early Warnings: Science, Precaution, Innovation.” It looks at all the big problems, whether Chernobyl or various environmental issues. There is always someone telling you, there has always been someone telling you, probably for years if not decades, and they have been ignored because they do not look important enough, they are not “one of us,” or they are internal and too junior.

I think the biggest finding of my trust project is the importance of listening to the opinions of people whose opinions you do not remotely value because they are the ones who are telling you something that is going to come and bite you. That is one of the reasons why I am so pleased to have you on here because I have been looking at your company and talking to you and others, and you do walk that talk.

Let’s hope not. I do not see any Frances Haugens coming out of Yoti, as we hear with lots of other companies, people saying, “We have told them this and they do not take any notice.” So well done for taking that on so early and in such an integral way.

JULIE DAWSON: Pride comes before a fall, so we try not to say that, because trust is built slowly. Most of the time we are trying to keep our heads below the parapet and just keep at it rather than celebrating; I think the moment you celebrate is the moment you lose it. We try to keep at it bit by bit, small step by small step, because that is the only way to go about it, and there is never a moment when you can say you have done enough on any of the different axes. That is about as self-congratulatory as we can get.

HILARY SUTCLIFFE: That is why I was very grateful that you came on, because you are putting yourself above the parapet by talking about this in the round, so thank you very much for taking that chance on this podcast as well.

The tech story is not black and white, and I think your area, as we have seen, is particularly complex. Regulation is one of those complexities. Like many tech companies you are ahead of the regulators, and there are ethical dilemmas there. You need good-quality regulation in such a contentious area, and there is existing regulation in all sorts of different areas, but if regulators are not properly anticipating what is coming, or if there are actors in the sector who do not care, you have to take the lead on regulation. Then you can very easily be criticized for being too close to the process or for capturing it. This balance between leading to get the right regulation and being seen to capture it is tricky, but I know you have thought about that and have been doing some work in that area. Tell us how you are trying to juggle that.

JULIE DAWSON: In some areas we were ahead. I would say, though, there has long been regulation around anti-money laundering and know-your-customer for, say, financial services and gambling. One of the things that has changed, however, is that governments around the world are looking at where, with the old forms of identification, the emperor basically had no clothes.

If I were a bad actor in today’s world, I could buy online a cheap fake passport or a cheap fake driving license—probably the cheapest document—or I could use a lost or stolen one, or details from hacks. Because all of that is out there in the wild, we have to think differently.

We have lived through the Equifax hack. People in Australia lived through the Optus hack. A lot of data is actually out there, and people can buy fake documents, buy counterfeit documents, buy or make tampered documents, and some will fraudulently apply for genuine documents. For all of those reasons the old-fashioned gospel truth of saying, “Oh, this is an identity document and must be great,” does not wash anymore.

You have to look at: Is this document actually authentic? Is it actually Hilary who is presenting this document, as in, say, a driving license or passport, or is Hilary presenting someone else’s document, and if that is actually Hilary’s document, has she tampered with the 6 or the 8 at the end of some digits to prove that she is over 18 or under 18, something like that?

Many regulators around the world have long had a bit of an ostrich approach, wanting to say, “Oh, it’s a document; it’s fine.” We saw as recently as COVID-19 that, for instance, if I was applying to host a Ukrainian refugee or applying during COVID-19 to be a volunteer, the instruction was just to upload a document. Well, I could actually be uploading this cinema ticket I have here, or a kiddie’s Legoland driving license if you are not checking that it is actually a valid license, or it could be Fred Bloggs’ license and not mine, a whole range of different circumstances.

One of the things in our world is looking at: Is this an authentic document? Is the face in the document the same as the live face presenting it? Is this a live face and not a mask, a 3-D image, a hologram, or a fake or synthetic image? And then, do all of the different elements stack up? In addition to a driving license database, you might have to check a database of lost, stolen, or fraudulently obtained genuine documents. That is the sort of thing companies like ours do when you look at the fraud landscape.
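To make those layered checks concrete, here is a rough sketch in Python of how they compose: every step must pass, and a single failure rejects the whole check. Each predicate is a placeholder for a specialized service (document forensics, liveness detection, one-to-one face matching, registry lookups); the names and the stub register are hypothetical, not Yoti's implementation.

```python
from dataclasses import dataclass

@dataclass
class Document:
    number: str
    photo: bytes

# Stand-in for a register of lost, stolen, or fraudulently obtained documents.
LOST_OR_STOLEN = {"X1234567"}

def is_authentic(doc: Document) -> bool:
    return True   # placeholder: security features, fonts, checksum digits

def is_live_face(selfie: bytes) -> bool:
    return True   # placeholder: anti-spoofing; not a mask or replayed image

def faces_match(doc_photo: bytes, selfie: bytes) -> bool:
    return True   # placeholder: one-to-one comparison, not identification

def verify(doc: Document, selfie: bytes) -> bool:
    # Each layer must pass; any single failure rejects the whole check.
    return (is_authentic(doc)
            and is_live_face(selfie)
            and faces_match(doc.photo, selfie)
            and doc.number not in LOST_OR_STOLEN)

print(verify(Document("AB987654", b"photo"), b"selfie"))  # True with the stubs above
```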

Yes, in some respects there has been long-standing regulation around how you might do gambling, travel, or setting up a bank account. However, digital identity and how it fits into the mix is being looked at partly because of the fraud landscape that has evolved.

Now there are a range of other sectors—the online safety sector, areas of discussion around age-appropriate access—which are changing, so many regulators around the world are saying: “Hmm, well, we wouldn’t let a ten-year-old do this offline”—going to a strip club or buying alcohol—“so why are we letting them do this online? We would not have a whole bunch of five-year-olds in this park live streaming to a bunch of 50-year-olds, but we are letting them do that online. Is that right?” So age appropriateness is something that our company has been building technology for over quite a few years, and digital identity, either in that reusable way that we described or the one-off transactional way, is another area.

E-signatures are another area where governments around the world are starting to think: If I am buying or selling a house, do the parties not really want to know it is me who has signed, and to link that to my government-issued identity document, to check it is not just an email to-and-fro that someone has intercepted?

So yes, we have been ahead in some respects, and we have spent a lot of time trying to engage with and provide input to those governments that are building trust frameworks around the world, and the same in the age-assurance area. We have been inputting into the standards development around age assurance and working with organizations that have been looking at how age-appropriate design should be put into practice; checking whether the person giving parental consent is an adult over 25 would be an example.

Yes, we have tried to be ahead in some areas. In other areas many others have paved the path ahead of time through other regulations. Yes, we have taken part in sandboxing, which would be one of the ways in which we have worked jointly with regulators, and through these regulatory roundtables that I have described.

HILARY SUTCLIFFE: Obviously things are changing fast. Some of these new image-making technologies must be causing a bit of a headache. What is causing you to lose sleep now? What about some of the difficulties you are facing, and what about some of the things that are moving so quickly in technology? What do we have to worry about, and what are you worrying about?

JULIE DAWSON: Let me split that into a few areas. Globally for digital identity, one big discussion is parity of acceptance: governments actually realizing that we cannot take the ostrich approach and say physical documents are the only way to go—that ship has sailed—and bringing in regulations to enable digital forms to be used on the same footing as physical ones. That is happening in the United Kingdom, Australia, Canada, and a few other parts of the world. It is starting to happen in Europe, but it has been very slow.

We have been at this for ten years and some of those regulations are still only just coming in. For instance, in the United Kingdom you still cannot go into a supermarket and prove your age using a digital form of verification. It still has to be something physical shown in person. There might yet be a change in the next year or so, but that will require a change in primary legislation to enable that digital form to be used.

I would say another big area is where people get their wires crossed and think that just because the word “facial” is used, it is always facial recognition and always surveillance. However, if you are setting up a digital identity and coming back to it, you need to know it really is Hilary reopening her digital identity, so you do a match to check that it is you.

We mentioned that there are a billion people on the planet who do not have a document. In the United Kingdom 24 percent of adults and 33 percent of young people do not have an ID document. To prove age they can do a facial age estimate, which just detects that there is a live face, analyzes it, and gives a result. They do not need a document for that; it is detecting and analyzing a face, not recognizing it. There is no unique recognition that it is Julie or Hilary in the supermarket buying the bottle of wine and proving they are over 18 or 25. We spend a lot of time explaining the difference between facial recognition and facial age estimation.
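That distinction can be put in code form. In this hedged sketch, estimation takes an image, returns only a number, and retains nothing, whereas recognition would need a stored gallery of known faces to answer "who is this?" The function estimate_age_model and the gallery shown in the comments are hypothetical stand-ins, not a real model or API.

```python
def estimate_age_model(image: bytes) -> int:
    """Hypothetical stand-in for a trained age-estimation model."""
    return 27

def facial_age_estimate(image: bytes) -> dict:
    # Detect and analyze the face, output an age, retain no image or template.
    age = estimate_age_model(image)
    return {"age_estimate": age, "over_18": age >= 18}  # no name, no identifier

print(facial_age_estimate(b"face-image-bytes"))  # {'age_estimate': 27, 'over_18': True}

# Facial *recognition*, by contrast, would need a stored gallery to name someone:
#   gallery = {"julie": template1, "hilary": template2}
#   identity = best_match(extract_template(image), gallery)
# The age-estimation path above keeps no such gallery and no templates.
```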

We spend a lot of time talking about liveness detection. There are good standards, for example from the National Institute of Standards and Technology (NIST) in the United States, where a thousand attacks are run against your system and you literally have to withstand 98 percent of them to get that Level 2 for liveness detection; in other words, no more than 20 of the 1,000 presentation attacks may get through. That is crucial within digital identity but also within the facial age estimation I have described.

We have to spend a lot of time with journalists, regulators, and civil society groups explaining how you do this anti-spoofing, explaining what is broken and imperfect in today’s world of showing documents, hence the need to rewrite some of those regulations, but also explaining what is under the hood when there is a document check. You are checking liveness, checking that the document is authentic, checking the face match, checking security features, and you might be checking lost-and-stolen databases. We have to explain all of those things under the hood, things that would not be happening in person. Because this is done digitally it can happen in the background, but then it has to be checked and audited independently to see that it is happening and is happening for the right use case.

Those are some examples, without being exhaustive, of the types of things: that parity of acceptance and explaining the nuances around recognition, estimation, and liveness detection.

HILARY SUTCLIFFE: Interesting. Very complicated area, and I am sure there are a million questions that listeners would be asking that I should be asking but have not asked.

This cannot have been a smooth journey. Could you give us some insights into what has gone wrong, what you have learned, what you would have done differently, and perhaps what advice you would give to other businesses looking to put trustworthiness at the heart of what they do, talking to others, and getting bombarded with different perspectives and people disagreeing? It is tough. Are there any lessons for the rest of us trying to do that as well?

JULIE DAWSON: Absolutely. There are lots of things that with 20/20 hindsight we would probably have done slightly differently. We tried to build our reusable digital identity right from the get-go, which was probably slightly too early for regulators and consumers; they took quite a bit longer to come around to that way of thinking. It was great for experience and we learned a lot of lessons, but the world was still in that transactional, do-it-once-and-have-to-keep-on-redoing-it mode, so it is only now catching up.

We kept asking, way back in 2014 and 2015: If I were to run a check against the electoral roll confirming that Hilary was definitely on it and over 18, I would not expect to see a little photocopy or a screenshot of Hilary’s entry on the electoral roll. Yet with a document we saw that a copy of the actual passport or driving license was continually asked for by, say, a fraud compliance officer in a bank or insurance company. They did not trust the fact that a check had previously been done.

We kept saying, “Ah, data minimization—surely that should not be necessary,” but actually a lot of this is change management, and it might take five years, in some parts of the world even ten, for people to come to that conclusion. Compliance and fraud managers are probably at the most cautious end of the adoption curve: they never got fired for choosing IBM and never got fired for choosing what had worked for the previous 30 years.

One of the things we had to understand was what the market was ready for and how you start that education and thought-change management process of explaining why identity documents have some flaws and how data minimization can work. That would be one thing.

We set up our guardians council, which we have described, and then COVID-19 hit us and we suddenly woke up to the fact that our guardians had nearly served two terms with us. Now we are in the process of finding our next guardians and getting them up to speed over the next 12 to 18 months to pass on the baton. If we had been more prescient in the midst of COVID-19, we would have thought about that a little earlier.

Accessibility, I think, is a big early area. Had we had that expertise literally in-house from the get-go, it would have saved a lot of rework, education, and training. We just did not find quite the right set of expertise right at the beginning, and one of the things that organizations building these products often only learn later is that the sooner you do it, the better.

It is a bit like somebody asking me, “When is the best time to plant a tree?”

“Ten years ago.” You should have already planted it. It is a little bit like that with accessibility: the sooner you start, the better, but ideally you should have started right on day one rather than building it in retrospectively.

The B Corp certification was hard to do right at the beginning, but it did force us a lot earlier on to think more broadly across governance, staff, suppliers, customers, and environment, which we never would have managed off our own bat. It really did stretch us. It was not at all easy, but it was inspiring to me to see other B Corps from all walks of life, be they a plumbing business, a café, a law firm, or someone like Danone, companies of such different sizes and sectors, also trying to tackle this journey.

There are other initiatives like that: the Better Business Act; becoming a Disability Confident employer; living wage and fair tax accreditation; the Employers’ Initiative on Domestic Abuse; and looking at how you employ a diverse set of staff through things like the Tech Talent Charter. Lots of those are things that a company, even at a very small scale, can start to look at and think, Which of these can help us on our journey?

That will also help you attract staff with a mindset that appreciates it. I think those are all self-fulfilling prophecies: if you start going down this route, the staff you attract will also be ones who appreciate that.

HILARY SUTCLIFFE: A lot of them are, as you call them, “trust marks.” A lot of organizations sign up to a lot of initiatives—we have 167 sets of principles for AI at the moment—which are trust marks in one way, but they are also sometimes considered to be just greenwashing.

It is a difficult balance for the people who are trying to do things right, because providing this evidence of trustworthiness is not straightforward, and I think it is interesting from your point of view how important trust is in this area. The more we hear about online hacking, perhaps the less likely we are to trust digital identity, or perhaps the more. Are you finding that trust in your sector and in yourselves is changing in the way that trust in digital technologies as a whole is changing?

JULIE DAWSON: It is a hard one there, Hilary. I am trying to tease that apart.

One of the things that we have seen comes through taking part in two citizens’ dialogues: we took part in the Ada Lovelace Institute’s Citizens’ Biometrics Council and also recently in one from the Department for Science, Innovation and Technology with UK citizens about digital identity. Each was a real kick, basically, because you had people in a raw, very unfiltered fashion saying exactly what their beef was, what their issues were, and what they were concerned about. That was so unfiltered that it was really refreshing, and we could only have accessed it in that fashion; if someone comes into your office for a focus group, they are not going to be as raw and brutal as that. It has been good to have the chance to participate in those, where it is brokered so that people feel they can say, in an unbridled fashion, exactly what they do and do not like about something.

I think that plain-English angle is again a good bit of advice, and it is something people said in both of those dialogues: “Don’t bamboozle us. Give it to us straight. Put it in straightforward English.” We saw the same when we were working on the age-appropriate design code, which says you have to devise everything for a reading age of about ten or eleven, and if you look at what the broadsheets aim at, it is exactly the same. We should all be looking at how all of our comms are written in a straightforward but non-patronizing fashion, because then, be they a young person or someone with a different first language, they can understand the concepts. You find the same with regulators: the more straightforward the better.

HILARY SUTCLIFFE: Absolutely. That is something very integral to the trust driver of openness.

I have a little side hustle on bullshit, where I do workshops with organizations about their obscure language and why they are using this ridiculous turgid guff or awful buzzwords, what it means about them, and what it says about them. Why don’t organizations communicate properly? It is difficult, but it is so obvious.

JULIE DAWSON: Legal contracts are a prime example, with these 48-page terms of service or 52-page privacy policies. One of the good things about the age-appropriate design work was that companies were forced to have co-design sessions with young people, look at gamification, and look at the key elements: How do you include some graphics? How do you make it something that people want to get through and read? You might only go up from 1 percent of people reading them to maybe 2 percent, even with more summarized versions, but you have to start going down that route because otherwise nobody will understand what they are actually signing up to.

I remember a classic one when we worked with young people. They did not understand what a third party was. Probably if we had done the same with a bunch of adults, they might not have understood, but they might have been too ashamed to say and might never have given us that feedback, but by doing it with a group of 10- to 14-year-olds we got some brutal feedback.

HILARY SUTCLIFFE: This is our second series of the podcast. In the first series a number of people talked about the engagement of citizens and the power of these citizen-engagement projects, whether they be assemblies or juries or different types of feedback, and how difficult and challenging they are but how you never get that sort of information anywhere else and how useful they are for everybody involved.

I am also interested in how you use your guardian council, how they contribute, and how you make these balances in some quite tricky areas, particularly in the public interest where what might be in your interest may not be in the public interest. Can you perhaps give us some examples of that?

JULIE DAWSON: The whole area of voting has been one that is very delicate. In the United Kingdom one of the things that was proposed was, should ID be mandatory for voting? You could say: “Oh, you are a digital identity company. Of course you want there to be ID for voting.”

But when we looked into it, we thought, Hmm, should the two actually be linked when the evidence is very mixed about that?

Our end view was that we did not see a rationale for coupling the two. If it were brought in, after consultation, and made mandatory, then downstream people might want different ways to show it so they are not losing their documents, and perhaps there are lower-cost ways of doing it through, say, a citizen card. But we would not want to go out ahead of time and say: “Yes, this is brilliant. We think there should be ID for voting.”

Another area our guardians were really hot on was whether electronic voting should be supported by digital identity. That could be looked at in a couple of ways. For elected representatives, say during COVID-19, with people of all different ages in some chamber voting remotely, we thought potentially yes. However, with citizens voting either at a booth or remotely, one of the things we had heard is that you could get terrible fallout.

Say you have Country X. You will always have a winning party and a losing party. Our guardians were concerned that even if our technology did what it said on the tin and allowed Fred Bloggs and Mary Smith to actually vote, you could still find terrible fallout simply because the side that lost was not happy.

They said to us that voting can be so emotive and toxic that for the time being we should only get involved in elected representatives’ voting and stay away from citizens’ automated voting. Maybe look at it again in five or ten years, but their strong view, particularly from our representative who was a human rights specialist in Latin America, was that there had been so many turbulent experiences of voting with identity in Latin America that it was just too toxic. That was one of the areas where our guardians said, “Hmm, stay clear for the time being.”

The other area that they spent a long time looking at with us was how our platform could be misused and how trust in the Yoti platform could be abused. This was difficult because you might say all a digital identity should be doing is saying this is definitely Julie or definitely Hilary, not saying that I am a nice person or that Hilary is a trustworthy person. However, if I were to go on a marketplace selling LEGO and then batter the other person unconscious in a car park, several times over, that in a way could take away the trust of other users, who would say, “Oh, this person was Yoti-verified and they did the following.”

We spent probably the best part of 18 months with lots and lots of different views for and against, and in the end we said: “If you had a nightclub, you would probably have your own set of rules for how you run that nightclub, what the dress code would be, what the behavior code would be.” So we decided: if it is just proving who you are and no one else is involved, like opening your bank account, paying your parking fine, or accessing a payment or a disabled parking spot with your digital identity, that you should always be able to do.

However, if we were working with sharing platforms or marketplaces and somebody abused that trusted position in a community, we would reserve the right, just for that sort of sharing or meeting use case, to remove that person’s ability to use Yoti to get back onto the platform, say the marketplace, albeit knowing that it could be difficult to get the evidence.

We are ten years on and we have not had to invoke that clause, but it was the sort of thing that literally took us a lot of time: thinking through how we would deal with repeated violent or fraudulent misuse that could breed mistrust among users. It is one we still have not had to test at large scale, but it really occupied our minds.

HILARY SUTCLIFFE: That’s fantastic. Let’s close it off, Julie, with one final thing: If you had to give one lesson to listeners, particularly companies that might be looking at these steps themselves, what would you suggest?

JULIE DAWSON: I don’t know if I can think of one specific silver bullet, but just keep listening. You never get to the end, but if you can build mechanisms to let your staff, your clients, your suppliers, and your peers tell you what they are not happy with, through roundtables or otherwise, you have to open the door a bit for that: open up your offices and look at different ways of basically inviting yourself to get hammered and scrutinized. It is better to feel that pain and get bad feedback early rather than later. That is probably the biggest one.

Early stakeholder involvement sounds very nice and shiny. It is not; it is quite hard. We have some sessions coming up where we are speaking with some NGOs and with people at difficult stages of their lives, trying to get raw feedback. Finding ways to keep on with that listening and staying open to feedback is probably one lesson.

I would definitely not say we are perfect at diversity and inclusion, but if your staff are all built the same way, they are not going to be able to build a product that serves a global community.

Probably the plain-English one would be the final one. If you can express something so that your child, your parent, and a very time-pressed politician can all understand it, then it is much more likely to be digestible.

HILARY SUTCLIFFE: Those were the findings of my trust project, Julie. Thank you very much for that. I did not prompt you in any way.

Thank you very much, Julie, for sharing your thoughts, sharing with us so candidly what your company does, and very good luck to you in the future.

JULIE DAWSON: Thank you so much, Hilary. It has been a real pleasure and privilege to join you here today and great to learn more about the work of Carnegie. Thank you.


Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.
