Artificial Intelligence & Equality Initiative

Can AI be deployed in ways that enhance equality, or will AI systems exacerbate structural inequalities and create new inequities?

The Artificial Intelligence & Equality Initiative (AIEI) is an innovative, impact-oriented community of practice seeking to understand the innumerable ways in which AI impacts equality, for better or worse. We work to empower ethics in AI so that AI systems are deployed in a just, responsible, and inclusive manner.

View AIEI's Board of Advisors.

The Artificial Intelligence & Equality Initiative seeks to:

Build

Build the foundation for an inclusive dialogue—an Agora—to probe issues related to the benefits, risks, tradeoffs, and tensions that AI fosters.

Nurture

Nurture an interdisciplinary, intergenerational community of practice to rapidly address urgent challenges in the uses of AI and other novel technologies.

Establish

Establish a forum for those in positions where they must make considered choices and decisions about the development and deployment of AI applications.

Forge

Forge transparent, cross-disciplinary, and inclusive conversations and guided inquiries.

Empower

Empower ethics as a tool for making thoughtful decisions about embedding AI systems and applications in the fabric of daily life.

Featured Podcasts, Events, & Articles

Now is the Moment for a Systemic Reset of AI and Technology Governance

How can we ensure that the technologies currently being developed are used for the common good, rather than for the benefit of a select few? Anja Kaspersen and Wendell Wallach write that for effective technology governance to truly materialize, a systemic reset directed at improving the human condition is required.

Ethics, Digital Technologies, & AI: Southeast Asian Perspectives, with Elina Noor

In this "Artificial Intelligence & Equality" podcast, Senior Fellow Anja Kaspersen is joined by the Asia Society Policy Institute's Elina Noor for a conversation on how we frame discussions of AI ethics and governance. They also speak about social justice and the digital landscape in Southeast Asia.

Wendell Wallach

Carnegie-Uehiro Fellow, Artificial Intelligence & Equality Initiative (AIEI); Yale Interdisciplinary Center for Bioethics

Anja Kaspersen

Carnegie Council Senior Fellow, Artificial Intelligence & Equality Initiative (AIEI)

Tom Philbeck

Carnegie Council Senior Fellow, Artificial Intelligence & Equality Initiative (AIEI); Director, Carnegie Ethics Fellows; SWIFT Partners

Noah Wong

Research Intern, Artificial Intelligence & Equality Initiative (AIEI); Stanford University

How does AI impact equality, for better or worse?

Structural inequality is the result of a broad array of political, economic, social, and cultural factors. The socio-technical systems that result from introducing innovations into this mix have become increasingly destabilizing. The sheer ubiquity and speed with which AI-based systems are permeating our lives is disrupting countless industries and institutions. Growing monopolies of proprietary data have empowered, and continue to rapidly empower, digital elites and new digital alliances. And yet the understanding of exactly how social and technical systems interact, and how to govern them globally, regionally, or locally, lags far behind. To complicate matters, some applications of AI may actually reduce inequality or enhance equality in discrete ways.

AIEI is working to unpack this difficult and highly transdisciplinary terrain to ensure that AI is developed and deployed in a just, responsible, and inclusive manner.

Read more.

Why are we failing at the ethics of AI?

The last few years have seen a proliferation of initiatives on ethics and AI. Whether formal or informal, led by companies, governments, and international and non-profit organizations, these initiatives have developed a plethora of principles and guidance to support the responsible use of AI systems and algorithmic technologies. Despite these efforts, few have managed to make any real impact in modulating the effects of AI.

Read more.

Latest Podcasts, Events, & Articles

NOV 9, 2021 Article

Seven Myths of Using the Term “Human on the Loop”: “Just What Do You Think You Are Doing, Dave?”

As AI systems are being leveraged and scaled, calls are frequently made for "meaningful human control" or "meaningful human interaction on the loop." Originally an ...

NOV 3, 2021 Podcast

Time for an Honest Scientific Discourse on AI & Deep Learning, with Gary Marcus

In this episode of the "Artificial Intelligence & Equality Initiative" podcast, Senior Fellow Anja Kaspersen sits down with Gary Marcus, a cognitive scientist, author, and entrepreneur, ...

OCT 29, 2021 Article

Mind Control to Major Tom: First State Regulates Use of Neurotechnologies

One of the last frontiers of science remains the human mind – but not for much longer.

OCT 19, 2021 Article

Why the Social Contract Must Become the Technosocial Contract

The human condition is a technological condition. Technologies are at the heart of how we live together, understand ourselves, make meaning, know the world around ...