Mapping AI & Equality, Part 2: Defining AI & Equality

Jan 27, 2022

In mapping AI and equality, it is helpful to develop broad categories that highlight distinct trajectories showing how AI impacts people and their relationships with each other, and with our shared environment. This series of blog posts aims to provide fodder for further thought and reflection.

Artificial Intelligence is a contested term. Generally, it refers to the simulation of human cognitive capabilities (discrete forms of intelligence) by machines. However, there have been disagreements as to which capabilities are truly expressions of intelligence.

The process or set of rules used by a computer to perform a task is referred to as an algorithm.

Many processes (algorithms) entail mechanical or purely mathematical steps, while other processes require higher-order mental faculties such as the capacity to learn or plan. But throughout the short history of computing, some essentially mechanistic tasks performed by computers have been labelled artificial intelligence. For example, the task of searching for a word or phrase in a large database or on the Internet is sometimes referred to as AI, even though this is generally recognized as a largely mechanical activity that requires no special intelligence on the part of the computer. Under this usage, AI applications have been around for decades.
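A minimal sketch (hypothetical, in Python) may help illustrate why such search tasks are considered mechanical: every step is a fixed rule applied in sequence, with no learning or judgment involved.

```python
# A purely mechanical "search" task: scan a collection of documents
# for a phrase. Each step is a fixed rule; nothing is learned.
def search(documents, phrase):
    """Return the indices of documents containing the phrase."""
    hits = []
    for i, doc in enumerate(documents):
        if phrase.lower() in doc.lower():  # simple case-insensitive match
            hits.append(i)
    return hits

docs = [
    "AI impacts equality",
    "Deep learning breakthroughs",
    "Equality of opportunity",
]
print(search(docs, "equality"))  # → [0, 2]
```

However fast or large-scale, the procedure above is an algorithm in the plain sense: a fixed recipe, not a display of intelligence.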

However, many AI purists, such as Stuart Russell, co-author of the leading textbook on the subject, wish to reserve the term AI for higher-order cognitive capabilities such as learning, planning, problem solving, and an accurate understanding of human speech (Stuart Russell, Human Compatible, 2019). The ability to make decisions autonomously (with little or no direct human involvement) has also been considered an essential feature in some definitions of AI. For example, the High-Level Expert Group on AI established by the European Commission in June 2018 made autonomous decision-making an element of its definition of an AI system, which it defines as machines “designed by humans that, given a complex goal, act in the physical or digital dimension . . . the best action(s) to take to achieve the given goal” (Independent High-Level Expert Group on Artificial Intelligence, 2019).

The history of research in AI has been characterized by periods of great enthusiasm, or AI summers, each generated by an exciting new approach, followed by an AI winter when the approach fails to lead to the anticipated breakthroughs. However, the advent of deep learning algorithms, combined with hardware powerful enough to execute them, is producing significant and continuing breakthroughs that have put the field of AI research on solid footing. The evolution of AI has not been linear, and its transformative impact on the human condition is still playing out across a spectrum of cultural differences that cannot be neatly captured in simple or statistical models.

Deep learning systems analyze large databases of information through multiple layers of processing. Deep learning algorithms have given rise to new approaches to tackle formerly intractable problems, as well as efficiencies and productivity gains, by sorting through libraries of data searching for and extrapolating salient relationships that might otherwise be impossible to find. This is a blessing for mining vast quantities of research data, such as that on any complex medical condition, to reveal correlations worthy of experimental investigation.   
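A toy sketch (hypothetical, in pure Python) can make the phrase "multiple layers of processing" concrete: each layer applies weights to its inputs and passes the result through a nonlinearity, and layers are stacked so that later layers work on the earlier layers' outputs. The weights below are illustrative constants; in a real deep learning system they are learned from large datasets.

```python
import math

# One layer of a toy network: weight the inputs, then squash the
# result through a nonlinearity (tanh). "Depth" comes from stacking
# such layers so each operates on the previous layer's output.
def layer(inputs, weights):
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

x = [0.5, -0.2]                              # raw input features
h1 = layer(x, [[0.1, 0.4], [0.7, -0.3]])     # first layer: 2 units
h2 = layer(h1, [[0.2, 0.5]])                 # second layer: 1 output
print(h2[0])
```

Scaled up to millions of learned weights and many layers, this stacked structure is what lets deep learning systems extract salient relationships from large databases.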

Deep learning is a subset of machine learning. As mentioned earlier, computers are far from having the general learning capabilities of even young children. Nevertheless, the learning techniques that have been computerized, in combination with other computerized processes, have outperformed human experts on a variety of tasks. For example, researchers were long bedeviled by the challenge of solving the three-dimensional structure of complex proteins, what is known as the protein folding problem. In November 2020, DeepMind (a division of Alphabet/Google) announced AlphaFold, an AI application that successfully described the structure of many proteins. Models for the structures of more than 365,000 proteins accompanied the opening of an AlphaFold Protein Structure Database introduced in July 2021 (Nature, July 22, 2021).

A rich ongoing debate exists as to which processes are truly worthy of being labeled intelligence and which appear human-like but are mechanical in nature. In turn, that debate informs a broader conversation as to which capabilities are likely to be computerized and which may (or should) remain uniquely human. That latter conversation is largely beyond the purposes of this article, except to the extent that the discussion is sometimes framed in ways that devalue human abilities and degrade human dignity in comparison to existing or future computational systems. In other words, people are already or will be considered unequal to machines in certain respects.

The distinct AI elements in a computerized system may comprise only a feature or two of more complicated applications, or of complex adaptive systems, such as financial or derivatives markets, in which humans and computer systems interact.

For the Carnegie Council’s AI & Equality Initiative we do not always distinguish between those elements of a system that have artificial intelligence, those algorithms which do not deserve that label, and the tasks that are performed by people. For example, the digital economy, which plays a key role in contemporary issues of equality and inequality, is composed of all three. However, the distinctly AI elements are becoming increasingly important in the digital economy’s overall growth, efficiency, stability, and security. In mapping the effects of computerized tasks on the digital economy we sometimes refer to “algorithmic and AI systems” as a way of nuancing but also bypassing more laborious discussions as to which elements are AI and which are not.

Some of the impacts of algorithmic and AI processes on issues of equality are intrinsic to the processes themselves; others derive from the databases upon which the algorithms are trained and the databases they analyze; and finally there are the tools and techniques humans elect to deploy. In mapping the ways in which AI impacts issues of equality we try to distinguish the import of the algorithms from features intrinsic to the database, or from the tools selected by people to serve distinctly institutional or individual goals.

Just as there are many forms of intelligence, from skill at higher-level mathematics to creative genius in jazz improvisation, there are many forms of inequality, from economic inequality, to unequal opportunity, to unfair or unjust treatment in comparison to preferential treatment given to others. Equality is not an ideal or goal in all respects. Individuals are born with different proclivities and hone different skills. People have access to, or create, differing economic resources, and generally this is tolerated, if not always embraced. Our concern lies with mapping the various ways AI, or the digital economy of which AI has become a key driver, impacts an array of issues around equality or inequality, often with little scientific scrutiny or meaningful public discourse.

Consider the goal of equal opportunity: leveling the playing field so that race, gender, or other cultural factors do not undermine access to education, jobs, or housing. Increasingly, lack of access to computers and the Internet can by itself put individuals and communities at a disadvantage. While providing access and digital literacy has become a central goal for UNESCO, and for individual states and cities in the fight against poverty, this is not explicitly a form of inequality where AI plays a significant role.

Anja Kaspersen is a Senior Fellow at Carnegie Council for Ethics in International Affairs. She is the former Director of the United Nations Office for Disarmament Affairs in Geneva and Deputy Secretary General of the Conference on Disarmament. Previously, she was head of strategic engagement and new technologies at the International Committee of the Red Cross (ICRC).

Wendell Wallach is a consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics. He is also a scholar with the Lincoln Center for Applied Ethics, a fellow at the Institute for Ethics & Emerging Technology, and a senior advisor to The Hastings Center.
