Mapping AI & Equality, Part 3: AI’s role in altering the human condition and what it means to be human

Feb 3, 2022

In mapping AI and equality, it is helpful to develop broad categories that highlight distinct trajectories showing how AI impacts people and their relationships with one another and with our shared environment. This series of blog posts aims to provide fodder for further thought and reflection.

The current AI discourse revolves around a core question: Will the human condition be improved through AI, or will AI transform the human condition in ways that undermine the basic tenets which enable a semblance of human cooperation? If both are true to some degree, how can we manage the tradeoffs, enhance the benefits, and limit potential harms to humans, society, and the environment? Furthermore, what will it mean to be human in an age constantly being transformed by technological possibilities? There are many reasons why these questions have become so central.

For many people, the loss of a job undermines their sense of worth and meaning; it is just one example of how automation and AI systems alter the human condition and can degrade the intrinsic value of people. This creates a tension between the increasing capabilities of AI systems and the erosion of human agency. In other words, people may come to be treated as unequal or inferior to machines.

People’s attitudes, behavior, and sense of self have already been altered by aggressive, micro-targeted advertising and digital propaganda designed to capture attention and manipulate thought and behavior. Propaganda and advertising are not new, but advances in the cognitive sciences have provided cues as to how the manipulation of behavior can be turned into an art form. The historian and best-selling author Yuval Noah Harari often notes that with thousands, even millions, of data points, the algorithm knows us better than we know ourselves. Corporations and other entities are empowered by these new technologies to manipulate individuals, and the manipulated individuals lose their agency. In the name of freedom, a relatively small fraction of citizens, marketers, and political ideologues is empowered, and given license by the sheer availability of the tools, to manipulate the rest of the population with little or no constraint. AI and algorithmic technologies increasingly shape how, what, why, and where we consume, what we read, what we listen to, and whom and what we vote for. The digital reduction of individuals and communities to mere statistics, ratings, and data points could enable a new kind of accidental digital totalitarianism.

Digital manipulation of behavior is just one dimension of the way in which the human condition is being altered. Our culture and patterns of understanding are being rewired, as are our brains and daily habits. Do we even understand what is taking place and how our cognitive faculties are continually being altered?

The human condition, not to be confused with human nature, is a broad term used to describe all the elements of human existence and what it means to be human. The human condition includes characteristics natural to all humans, but also encompasses the external circumstances and events individuals face, and any moral or ethical conundrums they might address. It refers to what we do with our innate characteristics, how we use them to shape the world around us, and how, in turn, that world shapes us.

Contemplating the human condition gave rise to natural philosophy, the forerunner of modern natural science, and has long guided studies in philosophy, art, politics, and religion, and more recently cognitive science and computer science. We contend that without an informed and open discussion about the impact of AI on the human condition, we are all at peril.

The techno-political narrative presumes and strengthens a belief in determinism at the expense of a meaningful free will in which human agency and intention can prevail. This flawed narrative rests on a kind of scientific reductionism that undermines more holistic perspectives. The risk is that ethics becomes reduced to a mechanistic utilitarianism, in which all the computer has to do is solve difficult trolley problems. This last point is important, for it suggests that ethical decision-making can be instantiated algorithmically (Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2009).

AI systems are by default (and sometimes by design) not transparent; that is, even the designers of a system cannot fully trace the steps that lead to a specific output. Nor can the system explain to its human designers and users the reasoning underpinning its output. If companies and engineers do not understand the activities of an autonomous system, or cannot control it, its use should be declared unacceptable, or be highly regulated, in situations where it may cause harm.

And yet there are increasing claims that autonomous systems will perform better or make better judgments than people, and that their use will therefore balance out any negative impact. For example, it is claimed, but certainly not proven, that autonomous vehicles will have far fewer accidents than human drivers. Even if self-driving cars are safer overall, one could occasionally harm or kill people under circumstances in which an attentive driver would not. Should we, as a society, accept such risks if the net benefits of autonomous cars are positive? Might the acceptance of self-driving cars translate into an automatic acceptance of other autonomous systems, for example in warfare, whose benefits and risks remain far less clear? These questions should be widely discussed, but they are not receiving adequate informed attention.

Claims that AI systems will make better decisions than their human counterparts are often buttressed by examples of how people are subject to both prejudicial biases and cognitive biases. A cognitive bias is a systematic error in judgment. Since the first such bias was demonstrated empirically in the research of Amos Tversky and Daniel Kahneman (D. Kahneman and A. Tversky, "Prospect Theory: An Analysis of Decision under Risk," Econometrica, vol. 47, no. 2, 1979, pp. 263–293; see also Daniel Kahneman, Thinking, Fast and Slow, Farrar, Straus and Giroux, 2011), many similar cognitive errors have been revealed. These range from an intuitive yet systematic misunderstanding of statistics to a tendency to ignore information that fails to confirm what one already believes. Some futurists argue that such evolutionarily bequeathed flaws in human nature justify genetic engineering, research on AI, and linking brains to AI systems, as well as other forms of enhancement designed to transcend human limitations and continue evolution by technological means.
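The misunderstanding of statistics is easy to illustrate with the classic base-rate problem studied in this research tradition (the numbers below are chosen purely for illustration). Suppose a condition affects 1 percent of a population and a screening test is 90 percent accurate, both for those who have the condition and for those who do not. Most people intuit that a positive result means the condition is about 90 percent likely; Bayes' theorem gives a far smaller figure:

$$
P(\text{condition} \mid \text{positive}) = \frac{0.90 \times 0.01}{0.90 \times 0.01 + 0.10 \times 0.99} = \frac{0.009}{0.108} \approx 8.3\%
$$

The intuition goes wrong because it ignores the base rate: the false positives among the healthy 99 percent vastly outnumber the true positives among the affected 1 percent.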

The claim that autonomous systems will be safer and more accurate is increasingly being used by manufacturers to argue that they should be exempt from certain forms of liability, particularly for low-probability events they could not predict. Determining what could or should have been foreseen is another matter. Certainly, AI systems can be designed to be free of the logical errors people commonly make, but they may exhibit other biases, or fail to be sensitive to and factor in salient information a human would be aware of. Currently, common-sense reasoning, semantic understanding, self-awareness, moral intelligence, and emotional intelligence and empathy are among the capabilities that engineers do not know how to embed in an AI system. There are many theories as to how such forms of intelligence might eventually be instantiated, but few proofs of concept.

Current discussions of algorithmic bias clearly indicate that computers cannot be fully relied upon for good decision-making. We acknowledge that in some situations present and future AI systems may make better recommendations than people with deep prejudices, people unaware of their cognitive biases, or those whose personal ambition overrides good judgment. Developers will increasingly find ways to minimize biased computer input and output. Nevertheless, we contend that the best decision-making will evaluate computer output and combine it with oversight and insights from operators and experts.

Unfortunately, contentions that AI systems will make better decisions than people already undermine human agency and self-confidence while justifying the turning of critical decisions over to machines. Regardless of whether AI systems ever supersede human intelligence, there is an ongoing dilution of human agency and an abrogation of responsibility and authority to machines. In the near term, this serves the interests of corporations that do not want to be held responsible for such scientific overreach or for the actions of the systems they deploy.


Anja Kaspersen is a Senior Fellow at Carnegie Council for Ethics in International Affairs. She is the former Director of the United Nations Office for Disarmament Affairs in Geneva and Deputy Secretary-General of the Conference on Disarmament. Previously, she served as head of strategic engagement and new technologies at the International Committee of the Red Cross (ICRC).

Wendell Wallach is a consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics. He is also a scholar with the Lincoln Center for Applied Ethics, a fellow at the Institute for Ethics and Emerging Technologies, and a senior advisor to The Hastings Center.
