If “trust is a must,” three practical things that AI regulators need to do

Jan 27, 2022

“On artificial intelligence, trust is a must, not a nice-to-have,” proclaimed Margrethe Vestager, the European Commission’s digital chief, in a statement launching the long-awaited European Union rules on artificial intelligence (AI). She is right. The new rules and actions the EU has proposed will be fundamental to earning the trust of citizens by ensuring that equality, ethics, and human rights are upheld in the ways that AI is used. But it is not only the technologies that need to be seen as trustworthy; the system of governance itself, and the regulators applying it, must also earn public trust.

The increasing weight that citizens give to governance as the basis for trust has been seen in attitudes to COVID-19 vaccine governance across the world, where trust in the approvals process mattered as much as trust in the vaccines themselves in combatting vaccine hesitancy.

Its importance in the digital space was reinforced by the UK government’s Centre for Data Ethics and Innovation, which found in its COVID-19 Repository & Public Attitudes 2020 Review that “trust in the rules and regulations governing technology is the single biggest predictor of whether someone believes that digital technology has a role to play in the COVID-19 response. This trust in governance was substantially more predictive than attitudinal variables such as people’s level of concern about the pandemic, or belief that the technology would be effective; and demographic variables such as age and education.”

But what do regulators across the world need to do to be seen as trustworthy and so earn public trust in their approach? Three critical factors identified in TIGTech’s recent research into Trust and Technology Governance may help:

1. Ensure effective enforcement

Our research shows that citizens trust governance most when they can see it is working—when governance institutions visibly stand up for the public interest, values are upheld, laws are enforced, organizations are penalized, breaches are published. The public are most likely to lose trust where they perceive that regulators are more concerned with smoothing the path of tech development and prioritizing financial concerns over ethics, societal values, and human rights.

The proposed EU AI Act is genuinely innovative in trying to tread the line between promoting innovation and upholding European values: it identifies clear “unacceptable” and “high-risk” applications that require special attention, while giving a comparative green light to less contentious areas. But commentators, among them a coalition of over 110 civil society organizations including Access Now, call for much more transparency about the criteria for allocating risk and about the providers and users of AI, clearer accountability for those in breach, and meaningful rights of redress for people impacted by AI. Meeting these demands would add to the evidence that the Act is trustworthy.

The main enforcement mechanism is the imposition of significant financial penalties on tech companies, up to 6 percent of global turnover, with assessment largely left to self-regulation. Will this be enough? Historically, even large financial penalties are often factored in as “a cost of doing business,” and self-regulation often leaves behaviors largely unchanged.

As this new legal framework is implemented, member states may need to consider more innovative and effective approaches than fines alone to ensure the regulation does its job of holding companies to account, and so earns the trust of citizens.

2. Be more human, open, communicative

People feel more confident about regulation when they know more about who is in charge. In the UK, for example, 82 percent of people felt more protected when they had heard of the regulator, and 67 percent would like to know more about what regulators do, according to the PA Consulting report Rethinking Regulators.

Under this new legal framework, member states will be required to appoint one or more national competent authorities to enforce the regulation at the national level. To build trust, it is important that member states widely publicize who these bodies are and what they are responsible for, and encourage them to open up much more about how they work, what they do, and how their approach is taking effect. Though uncomfortable for some, it is particularly helpful when regulator representatives are named and visible, and get out into the community and media to talk about what they do, how it is working, and when it is not.

“Be less aloof, more open, more human” is a recurring theme in citizen dialogues exploring trust in regulators; regulators should start with their websites. It is surely no coincidence that the UK’s two most trusted regulators—the Food Standards Agency and the Human Fertilisation and Embryology Authority—have the most accessible and informative websites. These are written in plain language, make clear what the regulators do, and give evidence that they are open, accessible, and inclusive in their approach to their roles.

3. Empower us and develop inclusive relationships with citizens

The EU AI Act does not allow for the views and values of citizens, and of those ultimately affected by AI, to be embedded in shaping its remit. Neither, as the UK’s Ada Lovelace Institute identifies in its proposals for strengthening the AI Act, does it allow individuals in any substantial way to complain about or challenge AI systems that breach fundamental rights.

Our work on Trust and Tech Governance shows that giving citizens agency to shape complex ethical decisions and regulation is an important driver of trust. This is not only because, as the research highlights, citizens “are more likely to trust a decision that has been influenced by ordinary people than one made solely by government or behind closed doors,” nor even because the more diverse the perspectives incorporated into decision-making, the wiser the judgements; it is because the involvement and empowerment of citizens lends democratic legitimacy, and so perceived trustworthiness, to the governance design process.

If trust is really a must, then regulators of AI in the EU and beyond may need to develop a new, more inclusive relationship with citizens, involving them directly or through impartial intermediaries in the complex ethical judgements that will need to be made if AI is to work for us all without causing more problems than it solves. An example of this in action may be found in the UK’s Citizens’ Biometrics Council, which will directly inform the policy and governance of facial recognition technologies.

These three important considerations will go a significant way towards earning citizen trust in the governance of AI and potentially then their trust in its many diverse applications. They also shift the role of regulators, moving them, as PA Consulting proposes, from being “Watchdogs of Industry, to Champions of the Public.” The Commission’s new rules are paving the way for this role, but they will only be effective if regulators in member states and beyond succeed in upholding the public interest through effective enforcement and meaningful engagement to help AI fulfil its potential for societal good.


Hilary Sutcliffe, SocietyInside, director of the Trust in Tech Governance initiative, www.tigtech.org
