Mapping AI & Equality, Part 4: Justice, Fairness, & Equal Opportunity

Feb 10, 2022

In mapping AI and equality, it is helpful to develop broad categories that highlight distinct trajectories showing how AI impacts people and their relationships with each other, and with our shared environment. This series of blog posts aims to provide fodder for further thought and reflection.

To date, much attention has been directed at the ways in which AI can reinforce existing forms of bias, including those around race, gender, and disability, that enable unjust and unfair treatment and limit opportunities. By bias we mean prejudices or distortions of fact that place an individual or group at a disadvantage or render them powerless. Justice and fairness (terms often used interchangeably) can be broadly understood as giving each person his or her due, without bias.

Unfortunately, existing inequities of gender, race, ethnicity, education, and class are often baked into the design and deployment of AI applications. For example, biases inherent in human societies and processes are often reproduced in the existing data troves being analyzed, and are therefore reflected in an algorithm's output and in the way the system functions. Attempts to use algorithms in sentencing individuals convicted of crimes have been shown to reproduce and reinforce existing biases, such as those identified with race and social class (Artificial Intelligence, Justice, & Rule of Law, AIEI Podcast with Renee Cummings).

While algorithmic biases are now widely recognized, the investigation of effective means to address and ameliorate their impact has only begun. AI ethicists are particularly concerned about the deployment of biased and inaccurate AI tools in fields where they can cause significant societal harm. Facial recognition software used in law enforcement, which matches data points from an image of one face against a database of faces, produces a significant percentage of false positives, often to the detriment of the less advantaged among us (Joy Buolamwini and Timnit Gebru, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81:77-91, 2018; also Anita Chabria, "Facial recognition software mistook 1 in 5 California lawmakers for criminals, says ACLU," LA Times, 2019). The State of California has therefore prohibited law enforcement agencies and officers from installing, activating, or using biometric surveillance systems in officer-worn body cameras (https://californiaglobe.com/articles/governor-newsom-signs-bill-banning-facial-recognition-technology-in-police-body-cameras/). Unfortunately, the use of biased algorithms in both the public and private sectors continues and often goes unrecognized.
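For readers who want a concrete sense of where false positives come from, the sketch below shows a toy one-to-many face search of the kind described above: a probe image is reduced to a numeric embedding and compared against a gallery of enrolled faces, and every gallery entry above a similarity threshold is reported as a "match." This is an illustrative assumption-laden sketch, not any vendor's actual system; the embeddings, gallery, and threshold are hypothetical.

```python
# Illustrative sketch only: a toy one-to-many face search, not any vendor's
# actual system. The embeddings, gallery, and threshold are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_gallery(probe, gallery, threshold=0.8):
    """Return every enrolled identity whose embedding clears the threshold.

    Anyone whose embedding happens to resemble the probe more closely than
    the threshold is reported as a match; a loose threshold produces false
    positives, and error rates are not uniform across demographic groups.
    """
    return [name for name, emb in gallery.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy usage: random vectors stand in for real face embeddings.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)
print(search_gallery(probe, gallery, threshold=0.2))  # loose threshold -> spurious "matches"
```

Even in this toy version, the choice of threshold trades misses against false alarms, which is precisely why accuracy claims about such systems need to be scrutinized before they are used in high-stakes settings.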

Nevertheless, there are countless examples of how present and future AI-driven applications can reduce inequality and poverty. PULA is an insurance company in Africa serving smallholder farmers. The challenge PULA addresses is that roughly once a decade, drought leads to crop failures in Africa, leaving many farmers not only destitute but also unable to purchase seed or fertilizer for the next planting season. PULA has worked with seed and fertilizer companies to bundle an insurance policy with each purchase, and farmers register their policy by cellphone.

PULA then uses deep learning algorithms to analyze cloud cover and determine which regions have received insufficient rainfall to ensure healthy crops. Farmers within those regions automatically receive a certificate to redeem seed and fertilizer for the next planting season. With the ubiquity of cellphones and the advent of deep learning algorithms, PULA has solved the problem of providing low-cost insurance to poor farmers who were uninsurable by traditional means. As of this writing, PULA claims to have insured 5.1 million farmers and processed roughly 39 million claims, with these numbers continuing to rise.
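To make the mechanism concrete, here is a minimal sketch of an index-insurance payout trigger of the kind just described: per-region rainfall estimates (in practice derived from satellite cloud-cover analysis by a learned model) are compared against a crop-viability threshold, and certificates are issued automatically to every registered farmer in an affected region. The threshold, data structures, and numbers are assumptions for illustration, not PULA's actual pipeline.

```python
# Minimal sketch of an index-insurance payout trigger of the kind described
# above. The rainfall estimates, threshold, and data structures are assumed
# for illustration and do not reflect PULA's actual models or pipeline.
from dataclasses import dataclass

@dataclass
class Farmer:
    phone: str    # policy registered by cellphone
    region: str   # region in which the insured plot lies

def regions_in_drought(rainfall_mm, threshold_mm=300.0):
    """Regions whose seasonal rainfall estimate (in practice derived from
    satellite cloud-cover analysis by a learned model) falls below the
    crop-viability threshold."""
    return {region for region, mm in rainfall_mm.items() if mm < threshold_mm}

def issue_certificates(farmers, rainfall_mm):
    """Automatically issue a seed-and-fertilizer certificate to every
    registered farmer in a drought-affected region."""
    affected = regions_in_drought(rainfall_mm)
    return [f.phone for f in farmers if f.region in affected]

# Toy usage with made-up numbers.
farmers = [Farmer("+254700000001", "north"), Farmer("+254700000002", "south")]
estimates = {"north": 180.0, "south": 420.0}   # hypothetical model output, mm of rain
print(issue_certificates(farmers, estimates))  # -> ["+254700000001"]
```

The design point worth noticing is that payouts are triggered by an objective regional index rather than by individual claims adjustment, which is what makes low-cost coverage for smallholders feasible at all.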

Stories such as PULA's are truly heartening and are often cited to support the widespread deployment of AI applications, even when those applications may reproduce biases or undermine privacy. To date, it remains unclear whether the positive ways in which AI applications, existing and anticipated, can ameliorate inequities truly justify the many ways in which AI can exacerbate them.

Image by Gerd Altmann from Pixabay

Anja Kaspersen is a Senior Fellow at Carnegie Council for Ethics in International Affairs. She is the former Director of the United Nations Office for Disarmament Affairs in Geneva and Deputy Secretary General of the Conference on Disarmament. Previously, she was head of strategic engagement and new technologies at the International Committee of the Red Cross (ICRC).

Wendell Wallach is a consultant, ethicist, and scholar at Yale University's Interdisciplinary Center for Bioethics. He is also a scholar with the Lincoln Center for Applied Ethics, a fellow at the Institute for Ethics and Emerging Technologies, and a senior advisor to The Hastings Center.
