Ethics, with Christian Hunt

Nov 16, 2023 54 min listen

In this episode, host Hilary Sutcliffe explores . . . ethics from another angle, with Christian Hunt, author of Humanizing Rules: Bringing Behavioural Science to Ethics and Compliance.

It's mind-boggling how many principles and guidelines are available on creating ethical cultures or delivering ethical technologies. But these are often high level and abstract, easy to talk about and hard to do. Hunt’s book explores ethics not top down from the C-suite but from the bottom up, using behavioral understanding and decades of hands-on experience to help organizations look at ethics from a human perspective and design the rules and processes that make ethics stick.


HILARY SUTCLIFFE: Hello and welcome to From Another Angle, a Carnegie Council podcast. I am Hilary Sutcliffe, and I am on the board of the Carnegie Council Artificial Intelligence & Equality Initiative. In this series I get to talk to some of today’s most innovative thinkers, who take familiar concepts like democracy, human nature, regulation, or even the way we think about ourselves and show them to us from a quite different angle. What really excites me about these conversations is the way they challenge our fundamental assumptions. Certainly for me—and I hope for you too—their fresh thinking makes me see the world in a new way and opens up a raft of possibilities and ways of looking at the future.

Today we are going to talk about ethics from another angle. It is mind-boggling how many principles and guidelines and frameworks are available on how to be an ethical organization, create ethical cultures, or deliver ethical technologies. AI ethicists report 228 ethical principles for AI alone, and there are many hundreds more not on their list and many thousands more in different sectors.

But interestingly, pretty much all of these start top-down with high-level principles and values. They are very easy to talk about (I have done a few of those myself in my time) and very interesting to do, but then some poor devil has to make them happen in real life in the organization. They have to design the rules, the communications, the compliance mechanisms, the incentives and disincentives, and the training that encourages or even compels people to do one thing and not another, and that is really much harder.

This is where the work of my guest today comes in. He helps organizations look at ethics from another angle, the bottom-up, the humans delivering against the principles that have been created from the top.

Christian Hunt specializes in human risk, and that is the risk of people doing the things they should not do or not doing the things they should be doing or you want them to do. He is also the author of a great new book called Humanizing Rules, which is a really helpful and practical guide to designing rules and processes that help make ethics stick in organizations.

Christian was previously a managing director and head of behavioral science at UBS; prior to that, chief operating officer (COO) of the Prudential Regulation Authority, a subsidiary of the Bank of England responsible for regulating financial services, where he helped shape regulation after the financial crash of 2008—that is a job. He knows a lot about designing rules with real experience in the sharp end of what works and what does not.

Christian, welcome and thank you so much for joining us.

CHRISTIAN HUNT: Thank you so much for having me, Hilary.

HILARY SUTCLIFFE: Let’s start with an introduction to human risk, what that is and how that relates to ethics generally, if you would.

CHRISTIAN HUNT: Human risk came from a realization I had. You talked about my career path. Well, I had a very unusual experience of eating my own regulatory cooking. I had joined the regulator, I always have to stress, post the 2008 crisis, and I joined in supervision, which is the bit of the regulator that faces off against firms.

I spent most of my time focusing on one firm. As your listeners who work in regulated industries will be aware, if a regulator is looking at you it is not because you are doing a great job. The firm that I spent most of my time looking at as a regulator was UBS, so I had rogue traders to deal with within seven days of taking my role, a ton of things that were not going right in that firm. So I spent a lot of time looking at what was happening there.

I then became COO of the regulator and realized I was not cut out to be a central banker, so I did something very unusual. In many countries you cannot do this, but in the United Kingdom you are allowed to go from the regulator to a regulatee and vice versa, the idea being you get a flow of understanding on both sides.

I have this unique experience of eating my own regulatory cooking. I was working in risk and compliance, having to implement rules that I had imposed on the firm, and what I started to realize there was that the rules really were not landing in the way that had been intended. The regulatory concept had been passed down to the firm, and they had various restrictions and rules placed on them, as well as the general rules that applied to everybody, with a view to preventing the mistakes of the past, and what was happening was it was not landing with people.

I was being sent on training by myself, because it was an email from myself as the head of compliance and risk telling me to go on a particular training course. So I went to the training course and it did not really do anything for me, or I would own policies and rules that I did not really understand or see the point of. It was not helping.

I was in one part of the business that was distant from where all the things had gone wrong; I was responsible for the asset management business, which had not had any issues. Lots of rules were being imposed on the asset management business that worked very well for the investment bank or wealth management, where there had been issues. So I was stuck with this problem of owning some rules and processes that I understood the genesis of and had been partly responsible for but that I could just see were not working as designed.

The second realization I had was that whenever things went wrong there was always a human component involved, either people causing problems in the first place or making them worse by the way they reacted or did not react to them.

Suddenly, there was this lightbulb moment that what we are missing with all of this is the human element of it, that we are not thinking about the people who are at the end of all of these things—the people who have to follow our rules, the people who have to attend our training, the people who make decisions on a day-to-day basis—that can determine whether an organization is ethical or not. Those are the people whom we need to get to and we need to influence, and yet that was not happening. This was not a criticism of UBS. This was just generally the way we went about solving this problem.

I think, to be honest, I got to a stage in my career where I realized I could not just let this lie and I had to do something about it. So I was trying to square this circle about how we do this.

In trying to get attention for this I came up with this idea of human risk. I thought, Well, if I can have a snappy title for this internally and I define it in the right way, I can talk about this being the largest risk facing the firm because humans are always involved in these pieces.

It began as an adventure that I started at UBS and then took to the wider world in 2019, looking at this challenge of saying: In the 21st century we employ human beings to do things that technology cannot. If a task is predictable and repeatable, we tend to give it to technology, and if we do not at the moment, we soon will.

What are we getting people to do? Well, the answer is things that involve nuanced judgment, emotional intelligence, personality. That is when humans are at their best, but it is also when they pose the largest amount of risk; it is also when they start thinking creatively and doing interesting things.

The challenge I see in the 21st century is really: How do we get the best out of our people? Human risk is a simple formulation that says: Look, if we are employing people—i.e., most organizations at the moment—then we need to be thinking about the risks that they pose, and we need to recognize that they do pose a risk. People often talk about “our people are our biggest asset,” and that is correct; it is a positive framing. The negative framing is that people are your largest risk.

If we want to think about what that means, we had better start to understand people and what drives them to make the decisions that they make, and let’s have a realistic view of what drives those humans’ decisions—not what we would like to have driving those decisions, but actually what is likely to drive those decisions—and then we can start to try to influence that more effectively.

What I start to look at under the human risk banner is to say, “Well, let’s recognize situations when humans might do something that we do not want or not do something that we do want them to do, and then try to design an environment around that,” and that environment includes rules. I use the term “rules,” and my book is called Humanizing Rules, but “rules” is really a proxy for everything, for any attempt to influence people—so yes, rules, policies, and procedures obviously, but also the design of controls, the design of communication, the design of training, anything that we are doing to try to influence people—and the aim is to look at all of it and ask, “How can we do that more effectively, recognizing the challenge it poses?”

If we take ethics as one of many topics that human risk covers, what we would be asking there is: What ethical outcomes do we want? Let’s identify who the people are who might make decisions. Of course, every single employee has a potential to make unethical decisions, but there will be particular moments where that risk is enhanced, so let’s ask ourselves: Which people do we need to influence, under what circumstances do we need to influence them, what might they do that we do not want them to do and not do what we do want them to do?

Let’s think about intelligent ways of getting through to them, and let’s use a very broad toolkit to do that, where we recognize that sometimes the answer will be awareness training and sometimes we need to just literally stop them from being able to do it.

Sometimes it might be overriding their natural tendency. It might well be that a natural tendency of someone in a particular situation is exactly what you do not want them to do but in other situations their natural inclination may be exactly the thing that you do want them to do.

If we start to think about ethics as a behavioral challenge, we can start to see that we need a broad range of tools to be able to manage this and we really need to understand what makes people tick and, therefore, how we can influence that in a positive way.

HILARY SUTCLIFFE: Fantastic. Thank you.

What I also liked about your book, and what I saw more clearly—and this happens with ethics and other areas of compliance—is the question: Are we concentrating on trying to prevent one bad actor, or are we really helping an organization get the best out of the majority of people who do not act unethically deliberately but whom organizational issues, personal issues, or incentive issues steer toward unethical behavior?

I was quite intrigued that you really take that approach, don’t you, that actually it is all about helping good people do things well?

CHRISTIAN HUNT: Yes. I start from the very simple premise that says, “When we think about this problem, it is much easier for us to think bad things happen because we’ve got bad actors doing bad things.”

Of course, if you think about the movies, that is always reinforced, and we have the goodies and the baddies and the villains doing terrible things—a James Bond movie is a prime example of where that sort of stereotyping happens—so we tend to think about those things.

Of course those people do exist. There will be people within any organization who are, for a multitude of reasons, setting out to deliberately break rules and do bad things, and of course we need to stop those people, we need to catch them, we need to prevent them, and all those sorts of things. But that is not the majority of people. It is a very, very small percentage of people.

There will of course be the opposite end of the spectrum, which is the people who never ever ever do anything wrong, the goody two shoes. They are also suspicious, by the way, and we should watch that because that is not normal either.

But those two groups of people we should park or put aside, because if you think about something like training: if I am setting out to do the wrong thing, I am not paying attention, I am not trainable, I am not listening to anything you are saying. I will show up for training in order to be seen to be there so I do not appear on your radar, I might be paying attention only to spot loopholes, and I will be reading policies to find loopholes that let me get away with whatever it is I want to get away with. These people are not trainable.

What we need to be thinking about is to say, “Look, the vast majority of people, we know, will show up to work and try to do the right thing if circumstances permit, so let’s try to focus on that.” If we send a signal to them that we do not trust them, that we think they are all criminals or potential criminals, then at some point they are going to respond in kind. We all know this from our personal relationships: “You don’t trust me, so I am going to return the favor, thank you very much.”

I think the focus we have is very understandably that the bad actor looms large in our thinking, but actually that is incredibly unhelpful because what we need to start thinking about is to ask: How do we get to the vast majority of people who given the chance would do the right thing and let’s try to steer them in the right direction? We will not do that if we send the signal to them that they are not trusted, and that is the big risk that I see in that focal point.

I’ve got to emphasize here what I am saying is catch the bad actors by different means—that is what you have monitoring and other programs for—but if we are looking at things like training and designing processes, let’s start from the premise that people do things for the right reason.

One of the other reasons why we would want to do that is if we can make it as easy as possible for people to do the right thing, then when we have people doing the wrong thing we have removed air cover because what we have basically done is we have made the path of doing the right thing obvious and clear, so for anybody who does not do it you have removed all their excuses; whereas, actually if we are making the focal point of everything around all the bad people, we often design processes of having incredibly long sets of rules that people can say, “Well, that was a bit confusing, I did not really understand that.” You provide air cover for these bad actors.

I like this positive message of “Let’s focus on the people who are to all intents and purposes trying to do the right thing and facilitate that and make sure that happens.”

HILARY SUTCLIFFE: Absolutely, and really interesting.

I am going to steal my favorite line in your whole book that you said, which I really like: “If you have one person who breaks a rule, you have a people problem; if you’ve got lots of people breaking the rule, you have a rule problem.” I really like the idea that actually concentrating on the single person is wrong.

Tell us more about how you have been concentrating on what works and what does not work and looking a bit more closely at how that works.

CHRISTIAN HUNT: I need to correct that slightly. I say, “If one or two people break a rule, then you’ve probably got a people problem; if lots of people are breaking a rule, you’ve got a rule problem.” When I say “one or two people” and “probably,” the simple point I am making is that if the vast majority of people are able to comply with the rule, then that kind of suggests that the people who are not doing it do not really have a good excuse.

Now, there will always be exceptions to that. There may be people who are in a unique set of circumstances so there is a reason why they cannot comply with that rule. But, broadly speaking, we can take as behavioral feedback the fact that if lots of people are complying and one or two cannot, that suggests that those people are somehow responsible for that.

The flip side of that then is to say, “Well, actually if you’ve got lots and lots of people breaking rules it is unlikely that you are going to have hundreds of people within your organization deliberately setting out to break a rule, so there will be some underlying reason why that is happening.” Now, the rule itself might not be the problem. It might be that you trained them eight years ago and expected them to remember; maybe the rule is very difficult to comply with; maybe there is a set of circumstances in which the rule is not clear or actually should not apply because it is stopping something legitimate from happening that you had not previously considered.

What I say with these things is: look at it as feedback. I got to this logic by thinking about traffic. Many of the ideas that occur to me come from looking for parallel universes, other circumstances that are not necessarily badged as “ethics” where we try to influence human decision making, and maybe the way we have gone about it there can help us think about the challenges we are facing in the ethics sphere. If you think about traffic, it occurred to me that we do two things.

The first one is the speed camera mentality where we will have a stretch of road, we will put speed cameras on there, we will take pictures of drivers who are speeding, and we will punish those drivers to try to deter drivers from speeding on that stretch of road. Very sensible because drivers are the unit of risk. If you can get the driver to behave correctly, they will not speed, and therefore you will reduce the risk of accidents. So we hold people accountable for their actions.

But we also do something else, which is we say: “Look, if there is a stretch of road, a junction maybe, where there are lots of accidents and lots of issues, we do not just blame the drivers; we also think about what it is about this junction that is posing a problem.” And we then probably change the structure of the junction—put more signs up, slow traffic down; we do things to the physical environment in order to reduce the possibility of accidents. It may just be a simple case of putting up a sign saying “Watch Out – Lots of Accidents Happen Here,” but, generally speaking, when it is a big issue, we change the architecture. I thought, That is really interesting, because we do not pin all of it on drivers; we recognize that maybe there are trees blocking signs or you cannot see around a blind corner; there will be certain cues that explain why accidents keep happening there. Drivers do not set out to have accidents.

As I thought about that, I thought, That is sort of interesting because the same thing applies when we think about rules, back to the idea of we have not hired a bunch of people hopefully who are deliberately setting out to break our rules—if we have, then we’ve got a recruitment issue—but, generally speaking, we haven’t got that problem.

So I thought, Why can’t we take that same idea of changing the environment and deploy it in the context of rules? If you think about it in those terms, then you can start to say, “Yes, of course we hold people accountable for their individual behaviors, so the speed camera mentality does apply; if I discover that one person is breaking lots of rules, then that is a problem, or if someone is an anomaly where everybody else is complying with the rules, then we should look at that and probably approach it with the speed camera mentality.” But I like this idea that there is also a particularly dangerous stretch of road that we change, and we focus on that as well. That is how I came to that piece.

I think we need to recognize that this is uncomfortable, and the reason organizations do not do it is that they then have to admit a little bit of organizational culpability: if your conclusion is “We’ve got to change a rule, change a process, we’ve got to do something,” then implicitly the structures that you had in place before were contributing to the problem, and that is not a comfortable discussion.

But I am not here to make life comfortable, I am here to try to help people solve problems, and I think by recognizing that often we can design things with the best intentions that do not work out the way we expected. Being prepared to admit that and course correct I think is one of the most powerful ways that we can solve these challenges.

HILARY SUTCLIFFE: Absolutely. Really interesting.

One of the things too is I was intrigued by your job title. Quite early on you were head of behavioral science. Now behavioral science is all over the place at the moment, sometimes being used in my view very weirdly and sometimes not. But I like very much how you talked about behavioral science in your book very straightforwardly, very helpfully, so you are looking at your rules with those behavioral lenses in mind. Tell us a bit more about the practicalities of that.

CHRISTIAN HUNT: The way the job came about and the genesis of the way I think about the world in some of the work I do now was really looking and saying, “If I recognize I am trying to influence people, suddenly I can start to look at other contexts.”

Back to the traffic example I have just given, you can look at other contexts where we are influencing people—we might not call it “ethics,” we might not call it “compliance,” we might call it something completely different—we use techniques in other fields to influence humans to behave in a particular way. So, if we start to look at the behavioral challenge that we have, remove the context and suddenly you can start to spot other ways of doing it.

The first point to think about is: Is this actually delivering what I think it is delivering? One of the logical things that we often do is we say: “We need them to understand this, so we are going to send them to a training course to understand this. They have attended the training course, so they now understand this.” Now, that may or may not be true.

But what we do not think about is: Do they agree with this; do they understand the rationale behind this; have they understood the theory but not the practice of this? So attending a training course, having a bum on a seat, or having a pair of eyes staring at screens and some fingers on a keyboard does not guarantee that you have gotten through to these people. It is an easy metric to say, “Everybody has done the training,” but actually has that training been effective? Very often, it is almost “the job is done when they have shown up.”

So what I started to look at was to ask: Does this actually equip people with the things that we need? Does this work in the way that we had intended? Is that email that we are sending out sending the right signal or are they going to read it in a completely different way? Are they going to read it at all? Is this landing in the way that we want?

The challenge that we often have is that things that we think work do not actually work that effectively. The best example of that is if we look at the world of advertising, there are lots of advertisements out there that make no sense whatsoever.

Think about jingles for example. Why does my propensity to buy a product increase if I recognize the branding, the logo, the jingle? It has nothing to do with the quality of the product. There is something there that is not logical; it is working on an emotional level.

So I just thought, We can start to look at what goes on elsewhere. I am not saying we necessarily need a branding or logo or jingle, but just the idea that there are other factors at play that we can use.

So, recognizing (1) there may be better ways of doing things that are more behaviorally effective; but (2) probably more important is to look at where we have unintended consequences, where we have things happening that were not in the plan, where the theory of how we have influenced people just is not working. We can start to recognize this very easily and there are lots of things that we do.

Regulators do not help here, by the way, because often regulators ask for things that do not work well from a behavioral perspective, so we’ve got that second challenge of how we keep them happy. Even people sitting in the C-suite might have a view as to what the most effective way of influencing people is, but that is not necessarily what really does influence people.

What I started to do was to think creatively about how we can solve the problems that we have and really get through to people, and about how we know that we are being effective.

One of the interesting things is when I talk to people about behavioral science, most people ask, “How do you know that is effective?” I say, “Well, how do you know what you are doing at the moment is effective?” So we should apply the same standards to new ways of doing things as we apply to existing ways.

Typically, what we find is that historically the only way we have known whether things are working or not is when things have gone wrong. Now, that is not a great way of looking at the world, because it amounts to saying, “I think our traffic safety program is working really well, but we will find out when we see whether we have had any accidents.” That is not a great way of approaching the world.

What I like to try to do is say to people, “Let’s think really carefully about what outcome we are looking to achieve, what situations might that outcome occur in, and let’s think about the real world. Let’s not think about a sort of hypothetical, nice, easy-to-organize world.”

Often we send people to what I call “murder is bad” training, where we ask people the question “What do you do in this situation?” and it is a situation that nobody needs any help with because it is blindingly obvious what the right answer is.

That is not what people need help with from an ethical perspective. What they need help with is the grey area, the bits that are a bit confusing, the bits that are not obvious to them. That is what we need to focus on.

We need to recognize where people might make a misstep—not, back to my earlier point, because they are setting out to screw something up, but actually because we all have competing propositions, we all have off moments, we can be tired, we might not be thinking straight.

The one line I always say to compliance or ethics people is, “The average employee is not interested in ethics or compliance.” I do not mean they are not interested in behaving ethically or compliantly. What I mean is they are not interested in the subject matter in detail. How do I know that? Well, if they were, they would be working in the ethics or the compliance function; but they are not, they are working in sales or marketing or whatever their job is, so their primary interest and their primary focus is doing that job. All of these other things that we want them to do, yes they are important, and of course if they have bandwidth and capacity to think about these things they would think about them, but they’ve got other things to prioritize, and so we need to plug into their world and we need to meet them and think about the realities of their existence.

That is where I would say to someone in advertising, “You will not sell a product if you do not market it to people and you do not explain to them how it fits into their existence, how this is going to make their lives better.”

And I would argue we have the same challenge with ethics. If we are introducing concepts and ideas and frameworks and tools and training, let’s be realistic about the world that they are operating in and make sure that we make it as relevant as possible and we make it as helpful to them as possible—in other words, we speak their language, not ours; we communicate to them about situations that they will recognize, that will be realistic. If we start thinking in those terms, we can suddenly start to evolve programs that will be more effective.

Behavioral science for me is part of the toolkit that also includes things like creativity, it is thinking differently about how we solve the problems, because if it is not working now, doing more of the same thing is unlikely to get a different outcome.

HILARY SUTCLIFFE: Absolutely.

I like that too. Ethics training is one of those things where you get sent on your ethics training, tick the box, let’s go. But there is lots of very creative and wonderful work being done in that area now, so I am not suggesting it is as much of a box-ticking exercise as it would have been a few years ago.

I think what you said there is that we are so focused on what we want them to know, and on saying it, that we forget about how it is received. There is a great book by Alan Alda, who used to be the star of M*A*S*H, called If I Understood You, Would I Have This Look on My Face? It is the job of the communicator to be understood, not the job of the person being communicated to, to understand.

As you were arguing, things are not designed for people with all of our very busy foibles and conflicting incentives in mind. I really like that too.

One of the things when we are looking at the behavioral science side is that it is quite difficult. I like the way you talk in detail about taking a rule and picking it apart, and about how often we do not scrutinize what works and what does not work because it is just too difficult and too complicated.

Can you talk us through a bit how you have done that in certain situations with certain types of rules, including any lessons from the financial crash, where rules were lacking in certain ways, and what you learned from that?

CHRISTIAN HUNT: Let’s start with something where we already have an existing attempt to solve a problem, so maybe we are trying to comply with a rule or we have an ethical outcome that we are looking at where something has gone wrong in the past and we designed something to try to correct it.

I think one of the important things to think about is to ask: What is the behavior that we are trying to change here? What is it that people are currently doing, or that there is a risk of them doing, that we do not want them to do, or alternatively not doing that we do want them to do? Let’s have a look at who those people might be, the situations in which they might do that, and the thought processes that they might go through—in other words, do what we would do for customers. We try to put ourselves in the customers’ shoes; there is lots of conversation about customer experience and the customer journey.

I think there is the same thing for employees, but we tend not to bother doing that. I talk about the employment contract fallacy, which is the idea that because we employ people we can tell them what to do. That is legally correct, we can; but it is a fallacy because it is not desperately effective if we spend our whole time effectively saying to people, “Because of the employment contract you are going to do this thing that you do not understand, and just do it.”

We can get away with a certain amount of that and we will need to use a certain amount of that in a particular context—for example, things that are matters of health and safety where that overrides everything else—where you say, “Look, this just absolutely has to happen.”

But when we are thinking about things like ethics, which is a bit more nebulous, shouting at people, mandating them to be ethical, that is fraught with difficulty, particularly where we cannot necessarily police it; we may only find out afterwards that they have been in a meeting and done something or they have taken a decision that was not one that we would have wanted.

We need their buy-in, so really looking and asking, “Can we understand the realities of what these people are doing?” A key component of that is to start reflecting on how they will see it—I do not mean how we would like them to see it; I mean genuinely how they are likely to see it.

That leads us into—and I have various frameworks that help us to think about this—questions like, for example: Do they genuinely understand what it is we are asking them to do? And, equally: Do they know that we understand what we are asking? One of the things that is really irritating is if somebody is asking me to do something and I think they have no understanding of what they are asking. That is very annoying.

We should focus on that and ask: Actually do they have a sense of why this matters to them and do they understand what it is we are trying to do? Often when you want people to do something they may need an understanding of the logic behind it, the spirit of the law as well as the letter of the law if you like. So we are looking at questions like that.

We are also looking at how easy it is for them to do because we are less likely to want to do something that is a pain in the backside than something that is easier for us.

We are looking at really focusing on the genuine experiences of the individual and their likelihood of accepting our authority in this particular field.

The example I always quote is to say, “Lots of organizations have social media policies; in other words, they impose rules on what their employees can and cannot do on social media.” Depending on what industry you are working in, you may have a different view on that. If you work for the security services, you may think it is perfectly reasonable that your employer bans you from having any social media because that could expose you to risk; but if you happen to work for an ordinary company and they turn around and say to you, “You may not have a LinkedIn account or a Facebook account,” that might feel like overreach.

Understanding that might lead us then to change what we demand of people: (1) We may say to ourselves, “Actually they are going to find that so challenging, so difficult, that is probably a step too far, so maybe we should not do that;” or (2) we may say to ourselves, “We really need this, this is really important.” So we are going to recognize that if there is a big gap between what we would like and what they are likely to do, then we need to think about how we close that gap or we need to monitor it because it poses a bigger risk.

So it is not about saying the employees should have it all their own way, but we should factor in where there is a bigger gap between the outcome we are looking for and the outcome that they are likely to produce for us, being realistic about what that looks like, and then we can start to ask: What risks are we running here? How can we best manage this particular challenge?

Often we treat all sorts of ethical principles or rules as being equally valid. I would say there are certain things that are not difficult to impose on people at all. They will completely understand why you are doing it, they will intuitively get it. They may just need a little reminder or a little pointing out of something that may not be obvious to them, but they are onboard.

There are going to be other things where they are going to say: “That is a pain in the backside. How dare they tell me to do that. I do not need this sort of thing, and that is going to restrict my ability to do business and that is going to prevent me from selling certain things,” and it causes angst and anguish. If we start to recognize those sorts of rules, requirements, and impositions on people, then we can say, “Actually that potentially poses a larger risk to our organizational efforts.”

I think if we understand behaviorally what people’s propensity, in the loosest sense of the word, to comply or to follow our principles is, then we can start to deploy our efforts much more intelligently, and maybe we need less of the “murder is bad” training and more of the kind of—and we are not patronizing here—“this is really difficult, and you may find yourself in a position where you are under huge amounts of pressure and in the heat of the moment you might not spot this, so we are going to talk you through what to do if you find yourself in this particular situation, and let’s have some clear and really sensible advice for how you manage that.” Spending more time on those things and less time on the obvious things is a better deployment of resources.

If you looked at any other form of risk, you would manage the risk appropriately; you would deploy resources at the things that pose the greatest amount of risk to the organization. I think the way we can measure that here is to ask: What is the behavioral gap between what we want people to do and what they are likely to do?

Of course, when it comes to ethics there are certain things where it is very, very clear what the right answer is and it is going to be consistently clear, so closer to what you might term the laws of physics where those do not change, so there will be certain ethical things that are obvious, very clear, and very straightforward.

If we look at sanctions, for example, there are certain types of business that we just will not do, and there is no question about that: we will not do business with sanctioned parties, or, as an ethical principle, we do not get involved with the following industries. You can be very, very clear about that, and that is very easy to codify.

There are going to be other things that are a bit more nuanced and require a bit more thought, so I think we should focus more on helping people through those difficult bits rather than spending equal effort telling them stuff that is blindingly obvious.

HILARY SUTCLIFFE: Brilliant.

Actually, one thing I keep wrestling with in my podcast is how at the moment we seem to be trying to make people more like machines and machines more like people. I have some sort of sympathy with the fact that machines in theory just do as they are bloody told, you just do that and it happens, whereas with people it is all so difficult and so messy. What I like about your book and your approach is recognizing that, looking at people for what they do bring, looking at machines for what they do bring, and making the most out of both.

I like, too, finding ways to understand what might go wrong, what could go wrong, in creative and interesting ways. I liked your idea of ethical hacking thinking, red teaming, and all that type of thing, practical ways of trying to find out what the risk might be. Give us some more examples of that.

CHRISTIAN HUNT: One of the simple things that I think we miss is that people give us clues ahead of time that they might do something that we do not want. If we think about when people commit fraud, people generally do not jump in and commit massive amounts of fraud as their first attempt. They will take baby steps in that direction and then, as they spot loopholes, it becomes easier for them, then the numbers go up and they start to create justifications and it becomes a habit. We know that from lots of other contexts.

One of the things that I have said is that we could start to look for the equivalent of poker tells in our population: things that people are doing which, if we paid attention to them, would give us an indication of where we might have a behavioral problem down the line—or might already have one but just have not spotted it yet.

One of the things that I think is interesting is looking at the interactions that people have with the organization and how they react to rules and regulations and training and all those sorts of things.

A good example would be that we often do not capture complaints about rules or training, and the logic behind that is essentially “tough”—“They have to attend this stuff, I do not need to, this is mandatory so they are just going to do it”—and we are back to the employment contract fallacy.

I think it is actually quite helpful to try to record some of this stuff and say, “Look, if you are getting lots of complaints about something, that is as much worth paying attention to as when you have customers complaining about something, because that could be the reason they stop using your service or stop buying your product.” The same thing is true internally: “If I really hate a rule, I think the rule is stupid, and I do not respect it, then what is the rule that I am likely to bend or break when push comes to shove? Chances are it is the one I have been complaining about.”

I am not saying that when someone is whinging about a particular thing it automatically means they will break that rule, but this is where, again, thinking about the collective matters: if you’ve got one person who is banging on about something a lot, then maybe you can ignore that one person; but if lots of people are telling you that a rule is stupid or they do not really understand it or they find it deeply irritating, then that is a pretty good indicator. Again, it is behavioral feedback.

I also like to look at the volume of inbound questions that we are getting. Those might be questions posed to ethics or compliance officers, or they might be people looking up policies and rules on the intranet. We can track that stuff, and if we start to track it we can ask what patterns we might expect.

So if we look, for example, at the holiday season, that is a time when lots of gifts are given, so it would be logical to assume that might be a time when people want to refresh their knowledge of the gifts and entertainment policy, and we could ask whether we are getting an uptick in that space. Now, I am not drawing the conclusion that an uptick is automatically good news, or that the absence of an uptick means they all understand the rules; maybe they are looking at the policy because it is confusing. I do not jump to conclusions about what the data tells us, but it is useful data that we do not generally think to capture. We ask ourselves, “What might this data be telling us?”

I would look at it and just simply ask: Are humans behaving in the way that we might expect? Then the question comes: Is it actually explainable in the sense that our preconceptions about what they ought to do turn out to be wrong because actually we have misunderstood something? Well, if we investigate that, that is useful information. Or is it that actually: Hold on a minute; there is a pattern of behavior here that just looks really odd? Well, let’s go investigate why that is.

I think if we do that we get a better understanding of what the population is up to, and therefore what makes them tick, and therefore we can design things better. Even if we do not make any great discovery, we are learning more about the people we are trying to influence and the way that they do their work. In the same way that we get lots of data about customers—if I look at Amazon, it collects tons of data about what people are looking at, where they go, all that sort of thing—why don’t we do the same thing with our “services” and see what people are looking at? That will start to give us some useful information. Now, you have to be smart about how you use that, and I would think carefully about it, but there is lots of untapped information. Because most interactions are digital now, we have the option of collecting this stuff.

The great news is that we can collect it on an aggregated basis. The traditional logic would be to weed out the individuals, but I am not interested in that. What I want to know is the big-picture patterns, where I can say, “This is where the herd is moving on this issue; are there lots of people doing something or not doing something?” That gives us some early behavioral cues. If we investigate those, we start to discover things before they go wrong rather than having them presented to us neatly on a platter afterward.

HILARY SUTCLIFFE: It sort of stands to reason you would want to do that, but it is quite surprising how often that does not happen. And as you say, I like this idea of tells as leading indicators, trying to see what we should be looking at and what can be construed from that—not that you can necessarily predict big disasters from small things.

I love also your point in the book about compliance and ethics and how we ourselves wiggle our way into making perhaps slightly unethical decisions or rule-breaking decisions. You found that if we want to justify doing something that breaks a rule, we tend to do so by reference to ethics; and, conversely, if we want to justify doing something unethical, we tend to do so by reference to compliance. That really brought it home to me. Talk to me a bit about that, this justification thing and how difficult it is for people.

CHRISTIAN HUNT: I was sort of reflecting on it and thinking, Well, one of the challenges that we have when we are looking at the field of ethics is that it is not—back to your earlier point—about programming machines. We have views on things and we think about stuff and we are very, very good at justifying our actions. I started to think about that.

I started with self-reflection, actually, about when I have broken rules (and I say “when”) or when I have been unethical: What was I thinking? The most obvious example of this is breaking a speed limit, which anybody who has driven will at some point have done. The rationale we tend to come up with will be something along the lines of why that was okay: it was the middle of the night, so there was no one around; the rule did not really apply under these circumstances, and that is why I did it.

The flip side there is if I have done something I should not have done, I can sort of point and say, “Well, there wasn’t a rule that told me I couldn’t do that.” So we see lots of people making excuses saying, “Well, if they didn’t want that, they would have created a rule against it.”

That is one of the arguments I use against fat rulebooks. It feels reassuring to say, “The more rules we have, the safer the world will be.” Yeah, but actually, if you have written a really fat rulebook and you have not covered a particular thing, you have created a massive loophole, because, logically speaking, if you’ve got these massive rules and there is not one covering that, then game on, because you have sent a signal that you have covered everything.

What we will tend to do there is to sort of find a way that we can say, “Nobody told me I couldn’t do it; there wasn’t a rule that specifically prevented that,” or we interpret the way the rule was written to allow us to justify whatever we are doing.

So we lean on these alternative things and will do it according to whatever suits us at the time. We will grab the most convenient rationale that is there.

As I reflected on the interaction between these two things, I just thought, That tends to be it. It tends to be “I will use weaknesses in the rulebook or some sort of ethical override to get me out of whatever problem I’m in.” If you think in those terms, then you can just go: “Ummm, (1) that shows you the interaction between ethics and compliance and how those two things sit next to each other; and (2) it shows you the challenges of codification.”

One of the things I think is really interesting about ethics versus compliance—I say “versus” because lots of compliance rules cover ethical principles. Particularly if there has been an issue in an industry—banking would be the best example, lots of unethical outcomes—we create some rules to prevent those in future, so we have codified ethics in that space.

The challenge is whenever we try to codify things we have to be pretty certain that we are covering every possible eventuality. I think that the challenge that we have with technology is that things that might not have been possible before will suddenly become possible, and therefore, in a very crude analysis, rules that worked in an analog world might not work in a digital world, and so we need to be thinking about those things.

Back to your point around humans and machines, the more we try to codify stuff we have to recognize that in codifying it we may be creating loopholes, we may also be missing gaps, and there may be things there that are not codifiable or if we were to codify them they would work beautifully in certain circumstances and not in others. So how do we manage that particular challenge?

HILARY SUTCLIFFE: Great. Your self-justification point is great because one of the things we see when risks become disasters is cognitive dissonance, the ability to justify ourselves as good people and contort ourselves into believing that this was not our fault and we are good people. There is a self-justification book I read. I must dig it out and get that author on, because I loved it.

Also, all of the risk around willful blindness and conflicting objectives, that is all human nature. And, back to your behavioral science, more empathy and understanding and listening around why we do things the way we do will help support us to do things right. That, I think, is the only way through for those types of things, isn’t it?

CHRISTIAN HUNT: Yes, and I think particularly if you look at why we are employing people in the 21st century—back to my earlier comment about giving predictable tasks to machines—we are hiring people to do things where these issues are more likely to be prevalent.

There is a brilliant Lego ad campaign from about three years ago, run for International Women’s Day, that I love showing when I do presentations. In the 1980s Lego had run a series of ads with two little girls in them, implicitly presenting to the parents some sort of model the girls had built. The tagline said something along the lines of “What it is is beautiful.” Anyone who has kids will know this. The message was essentially, “If it’s their creation, isn’t it wonderful?” Those ads from the 1980s were quite forward-thinking for that time.

A couple of years ago they updated it with a much more diverse cast of young girls and used similarly interesting words. They almost copied the same language: “What it is is creative, what it is is innovative, what it is is inventive, what it is is experimental.”

I show this one, first, because I think it is a really cool ad; but the second thing is to say, “Those words are exactly what we are hiring people for.” We know that because, if you have a look at job specs out there, they do not say, “Join us to slavishly follow the rules and do exactly what you are told,” because nobody would go and work in that organization. Even the military does not do that.

We know through the interview processes that we put them through that we are hiring people for human attributes. We also see diversity as really important to avoid groupthink: if we want an organization to be creative and to serve the needs of a broader range of the population, we need a more diverse set of people, so we are bringing in these diversities of perspective.

I think one of the challenges that we’ve got then is that we will have a lot more humans to think about, a lot more perspectives to reflect on, and lots of our perspectives about what is ethical come from our own experiences and the way we were brought up. When we look at that, it becomes a more challenging piece.

But if you are hiring all these people to do these very human things, and we then come along and try to control that, that is running against the grain of the reason they were hired; if you are hiring people to be creative and then suddenly say, “Oh, don’t be too creative though,” it is pulling in exactly the opposite direction.

I think one of the challenges that we are going to start to see going forward is that ethics, in the same way as compliance, may feel like a bit of a handbrake being applied to the natural flow of business. That is why I think we really need to recognize the risk of codifying too much of ethics without thinking about unintended consequences, and to recognize that what we are actually doing is experimenting with influencing people: any attempt to influence people is by definition an experiment.

Marketing gets this because you do not put an ad campaign out there without having tested it on an audience, and you are not allowed to spend money in Ad Land without having some good rationale. “I want to spend a million on this ad campaign.” Well, what evidence do you have that this is going to work? And if it does not work, you pull the campaign pretty quickly.

We do not seem to do that for some reason in the ethics space, because what we sort of say is, “Well, yeah, we need to get people to be ethical,” and we do not think about how it is working. The monitoring tends to be: Are they doing what we want them to do? We do not do what I talked about before, asking, Have we designed this in the most effective way?

I think an experimental mindset in the ethics and compliance space is absolutely critical for two reasons.

One is it puts you in harmony with what the business is doing. As businesses evolve they are experimenting and doing things, so we need to be adopting a similar mindset for what we do. I think R&D is incredibly important in the compliance and ethics space.

The second thing is that then means that if we have it wrong or we need to course correct we are ready to do that; and we avoid some of those human biases you were talking about, the presumption that “We have organized this perfect system and now those idiot humans are not doing what we want.” We can then say: “Hold on a minute. We are now getting enough feedback that there must be something wrong in what we are doing. Let’s course correct before something goes hideously wrong.”

So recognize that any attempt to influence people is (1) an experiment, but (2) will come with what we call in medical terms side effects, stuff that we did not want to happen. Now, “side effects” is a nonsense term; it is just a label we use for the effects that we do not want. They are effects in the same way that the designed outcome is an effect. We should recognize that they are happening.

One of the things I always say to my clients is: “If we introduce something that is a behavioral intervention, an attempt to influence people in some way, shape, or form, think about the unintended consequences. Be cynical about this and ask, ‘What is it that this could unleash that we do not want?’ The answer is not ‘nothing,’ because on some level there will be something there. Recognize whether it is significant, and, if it is significant enough, do not do it. But even if it is not significant, monitor for it; look at it and say, ‘We knew that this was a potential thing that might come up; are there any signs of that?’ If you know what you are looking for to start with, you will have a much better opportunity of catching it if it does start to become an issue.”

HILARY SUTCLIFFE: You have come around full circle and are talking very much about techy things here, this idea of unintended consequences and consequence scanning and all of those types of things. It is just so important. When we have seen things go wrong—whether it is Chernobyl, whether it is something to do with the Big Tech issues of social media—(a) somebody somewhere has always been telling you, if you were prepared to listen to them; but (b) you also cannot do this without being there on the ground listening to people. And the unintended consequences are usually not tech consequences, they are human consequences.

So you come right back to your initial premise, which is that humanizing rules, humanizing organizations, and understanding the way people work and react and interact with the processes and the individuals within the organization is absolutely essential in terms of understanding unintended consequences and heading off ethical issues before they become disasters. That is a fantastic wrap-up you have done for us there.

Just one final thing, Christian. If you were to say to our audience—you have worked for very complicated organizations, very complicated businesses—what one thing would you say about humanizing rules? What one tip would you give us to sort of help us on our way?

CHRISTIAN HUNT: I think the number-one rule is think about things from the perspective of the people whom you are trying to influence. And yes, if you have a diverse population, different businesses, different locations, this is going to be more challenging, but I think starting to think about things from the perspective of your target audience, your employees, is critical.

In doing that I will come back to my number-one mantra: Do not think about what you would like people to do; think about what they are likely to do. Be realistic about what real humans in the real world out there are likely to do. Therefore, there is no place for wishful thinking in this. We need to really start to understand that.

If we do that, yes, we may get overwhelmed by all the things that we need to think about, but we will start to think in the right way because we are starting to think from their perspective, and they are the people we need to reach. I always say organizations cannot be compliant or ethical of their own accord; it is the people within them that will determine that. We need to be thinking about things from their perspective. The more we do that, the more likely we are to be effective.

HILARY SUTCLIFFE: Brilliant. Fantastic. What a great end for this. Thank you very much, Christian Hunt.

And I would actually like to give a plug for Christian’s own podcast, The Human Risk Podcast. He has had a phenomenal number of guests; I have no idea how you do all this on top of the work you already do. If you want to learn more, hop over to Christian’s own podcast as well.

Thank you very much, Christian Hunt.

CHRISTIAN HUNT: Thank you so much for having me.

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.
