Ethics and Fairness in AI: Who Makes the Rules?

Dec 23, 2021

In a September 2021 talk at Sparks! The Serendipity Forum at CERN, Anja Kaspersen, senior fellow for Carnegie Council’s Artificial Intelligence & Equality Initiative (AIEI), discusses issues relating to artificial intelligence and power. As humans continue to develop and evolve AI systems, she asks, who has agency and influence in this new world? Kaspersen says that it is necessary to understand that humans are in control, to avoid technologically deterministic narratives that essentially serve co-opting interests and disengagement, and instead to think more deeply about why and how AI systems might perpetuate existing inequalities or create new ones.

While it is clear that artificial intelligence (AI) systems offer opportunities across various domains and contexts, a responsible perspective on their ethics and governance has yet to be realized, and few initiatives to date have had any real impact in modulating their effects.

AI systems and algorithmic technologies are being embedded and scaled far more quickly than the underlying technology is maturing – and existing governance frameworks are still evolving.

Why are we not getting it right?

Allow me to share a few select reflections, debunk a few myths, and leave you with some questions to ask as AI-based systems permeate our lives:

First: Too often, the debate on AI is presented as if the technology will evolve along its own separate trajectory.

This is not the case.

It is human all the way. Every novelty, every challenge, every bias, every value, every opportunity. We are at an inflection point, but it is a human and to some degree a political or even a teleological one, not a technological one.

Second: Many of the existing dialogues are too narrow, failing to appreciate the subtleties and life cycle of AI systems, their (sometimes limited) functionality, and their impact. Moreover, the scientific methods and complexity underpinning the what, how, and why of an AI system's construction are poorly understood by decision-makers. We need better scientific and anthropological intelligence, both when building foundational models and in how we think about future applications of artificial intelligence.

Third: The debate focuses on some aspects of ethics, while ignoring other aspects that are more fundamental and challenging. For example: What potential might AI systems have to perpetuate existing inequalities and create new ones?

This is the problem known as “ethics washing” – creating a superficially reassuring but illusory sense that ethical issues are being adequately addressed, to justify pressing forward with systems that end up deepening current patterns.

Fourth: The overfitting, overpromising, overpremising, and underdelivering trap. In a variety of sensitive domains, from health care to employment to justice to defense, we are rolling out AI systems that may be brilliant at identifying correlations but do not understand causation or consequences. This carries significant risks, especially in politically fragile contexts or in the public sector.
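To make the correlation-versus-causation point concrete, here is a toy sketch (my illustration, not from the talk) under an assumed setup in which a hidden confounder `z` drives both a feature and an outcome. A model fitted on observational data predicts well, yet its predictions collapse the moment we intervene on the feature:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Assumed setup: a hidden confounder z drives both the feature x and outcome y.
z = rng.normal(size=n)
x = z + 0.1 * rng.normal(size=n)
y = z + 0.1 * rng.normal(size=n)

# An observational fit finds a strong x -> y relationship (slope near 1)...
slope, intercept = np.polyfit(x, y, deg=1)
print(f"learned rule: y ~ {slope:.2f} * x + {intercept:.2f}")

# ...but intervening on x (setting it independently of z) breaks the link:
# y is still driven by z, so the learned rule no longer tracks the outcome.
x_new = rng.normal(size=n)            # intervention: x no longer caused by z
y_new = z + 0.1 * rng.normal(size=n)  # y unchanged, still follows z
pred = slope * x_new + intercept
print(f"correlation of prediction with outcome after intervention: "
      f"{np.corrcoef(pred, y_new)[0, 1]:.2f}")  # near 0
```

A system like this looks brilliant in validation and fails in deployment, which is exactly the trap when the deployment itself changes the circumstances.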

Fifth: Discussions on AI and ethics are still largely confined to the ivory tower. More informed public discourse and serious investment in civic education, digital literacy, diverse participation, and transdisciplinary pursuits around the societal impact of the bio-digital revolution are necessary.

Sixth: Too much of what the general public currently perceives about AI comes from sci-fi tropes and blockbuster movies. We need better ways to communicate to the public that, beyond the hypothetical risks of future AI, there are real and imminent issues arising from why and how we embed the AI systems that already shape everyone’s daily lives – and who gets to decide.

Seventh: The public discourse on AI is not sufficiently addressing downstream consequences and is too often focused on a myopic, tech-solutionist optimization approach rather than on what the problem actually requires. I call this the AI pixie dust problem. The neglected issues range from how to reduce the environmental impact of resource-intensive computation, to questions of data ownership, to how we cultivate talent, build digital literacy and AI fluency, and create shared spaces and a shared vernacular for exchanging insights.

Specific hard questions rarely enter the public discourse, despite their importance to safely embedding AI systems in the public realm. These include concerns about the interruptibility of AI systems: Can we truly design and build both fail-safe and fail-secure mechanisms into AI systems, so that they safely interrupt or protect themselves if an unexpected change in circumstances leads them to cause harm? The jury seems to still be out on this one, and regulations remain embryonic at best.
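To illustrate the distinction in the simplest possible terms, here is a minimal, hypothetical sketch of an interruptible decision loop: fail-safe means falling back to a harmless default action, fail-secure means locking down and stopping entirely. The `agent_step`, `check_vitals`, and `safe_default` hooks are illustrative stand-ins, not any real system's API:

```python
from enum import Enum, auto


class FailureMode(Enum):
    FAIL_SAFE = auto()    # on anomaly, keep running but substitute a harmless action
    FAIL_SECURE = auto()  # on anomaly, lock down and stop acting entirely


def run_interruptible(agent_step, check_vitals, safe_default,
                      mode=FailureMode.FAIL_SAFE, max_steps=1_000):
    """Drive an automated decision loop that can safely interrupt itself.

    agent_step()   -- proposes the system's next action
    check_vitals() -- returns False when circumstances change unexpectedly
    safe_default() -- a known-harmless action used when failing safe
    """
    actions = []
    for _ in range(max_steps):
        if not check_vitals():
            if mode is FailureMode.FAIL_SAFE:
                actions.append(safe_default())  # degrade gracefully, stay available
                continue
            break                               # fail-secure: halt entirely
        actions.append(agent_step())
    return actions
```

Everything difficult, of course, hides in `check_vitals`: detecting, reliably and in time, that circumstances have changed enough to warrant interruption.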

Interoperability is another underappreciated problem: From autonomous vehicles to financial technologies and defense applications, no matter how well-designed an individual system may be, there is potential for unanticipated consequences if it is not able to operate smoothly with other systems.

Additionally, the reach of AI systems into every aspect of daily life was dramatically accelerated by pandemic lockdowns that shifted more social and economic activities into the digital world. Leading technology companies now have effective control of many public services and digital infrastructures through procurement or outsourcing schemes. Governments and health care providers deployed AI systems for proximity tracking and tracing applications and bioinformatic responses at an unprecedented scale. This has triggered a new economic sector organized around the flow of biodata.

Another concern is that the people who are most vulnerable to the negative impacts of AI are also the least likely to be able to join the conversation, either because they have no digital access or because their lack of digital literacy makes them ripe for exploitation. Such vulnerable groups are often theoretically included in discussions but not empowered to take a meaningful part in decision-making. This engineered inequity, alongside human biases, risks amplifying otherness and othering through neglect, exclusion, and mis- and disinformation, with significant consequences.

More insidious, perhaps, is the wider debate on the intentionality versus unintentionality of technology’s effects (going beyond traditional discussions of dual use). This debate often features claims that technology and AI are apolitical in nature, or that there is no way of fully predicting their impact once they are in use. Interestingly, this view is particularly prominent among non-technical people, who do not grasp that every technological creation embeds values, and that values reflect culture, politics, historical patterns of behavior, and economic realities.

In my view, it is time to return to the drawing board and address how to translate principles into practice. To do this, we need to:

  1. Grapple with our blind spots.
  2. Ask ourselves: Do we have the right platforms, and the right people leading them? Have we empowered people to engage with these platforms meaningfully?
  3. Learn from the CERN experience that collaboration can provide very effective checks and balances for responsive and responsible science and technology.
  4. Be clear on what aspects of ethics we are considering, and which are being overlooked.
  5. Be aware of whose rules, values, and interests we are embedding into our technologies.


Anja Kaspersen is a Senior Fellow at Carnegie Council for Ethics in International Affairs. She is the former Director of the United Nations Office for Disarmament Affairs in Geneva and Deputy Secretary-General of the Conference on Disarmament. Previously, she was head of strategic engagement and new technologies at the International Committee of the Red Cross (ICRC).
