Artificial Intelligence & Equality Initiative

Can AI be deployed in ways that enhance equality, or will AI systems exacerbate structural inequalities and create new inequities?

The Artificial Intelligence & Equality Initiative (AIEI) is an innovative, impact-oriented community of practice seeking to understand the innumerable ways in which AI impacts equality, for better or worse. We work to empower ethics so that AI is deployed in a just, responsible, and inclusive manner.


For AIEI-related inquiries, please contact Carnegie Council Chief of Staff Melissa Semeniuk: [email protected]

The Artificial Intelligence & Equality Initiative seeks to:

Build

Build the foundation for an inclusive dialogue—an Agora—to probe issues related to the benefits, risks, tradeoffs, and tensions that AI fosters.

Nurture

Nurture an interdisciplinary, intergenerational community of practice to rapidly address urgent challenges in the uses of AI and other novel technologies.

Establish

Establish a forum for those in positions where they must make considered choices and decisions about the development and deployment of AI applications.

Forge

Forge transparent, cross-disciplinary, and inclusive conversations and guided inquiries.

Empower

Empower ethics as a tool for making thoughtful decisions about embedding AI systems and applications in the fabric of daily life.

Featured Content & Analysis

SEP 29, 2023 Article

Envisioning Modalities for AI Governance: A Response from AIEI to the UN Tech Envoy

JUN 5, 2023 Article

Are We Automating the Banality and Radicality of Evil?

Current iterations of AI are increasingly able to encourage subservience to a non-human master, telling potentially systematic untruths with emphatic confidence.

SEP 6, 2023 Podcast

Can We Code Power Responsibly? with Carl Miller

In this thought-provoking episode, Carl Miller tackles the pressing questions: Can we code power responsibly? And how do we define "power" in this context?

Wendell Wallach

Carnegie-Uehiro Fellow, Artificial Intelligence & Equality Initiative (AIEI); Yale Interdisciplinary Center for Bioethics

Anja Kaspersen

Carnegie Council Senior Fellow, Artificial Intelligence & Equality Initiative (AIEI); IEEE

How does AI impact equality, for better or worse?

Structural inequality is the result of a broad array of political, economic, social, and cultural factors. The socio-technical systems created by introducing innovations into this mix have become increasingly destabilizing. The sheer ubiquity and speed with which AI-based systems are permeating our lives is disrupting countless industries and institutions. Growing monopolies of proprietary data have rapidly empowered, and continue to empower, digital elites and new digital alliances. And yet our understanding of exactly how social and technical systems interact, and of how to govern them globally, regionally, or locally, lags far behind. To complicate matters, some applications of AI may actually reduce inequality or enhance equality in discrete ways. AIEI is working to unpack this difficult and highly transdisciplinary terrain to ensure that AI is developed and deployed in a just, responsible, and inclusive manner. Read more.

Why are we failing at the ethics of AI?

The last few years have seen a proliferation of initiatives on ethics and AI. Whether formal or informal, led by companies, governments, or international and non-profit organizations, these initiatives have developed a plethora of principles and guidance to support the responsible use of AI systems and algorithmic technologies. Despite these efforts, few have made any real impact in modulating the effects of AI. Read more.

Latest Podcasts, Events, & Articles

NOV 10, 2021 Article

Why Are We Failing at the Ethics of AI?

As you read this, AI systems and algorithmic technologies are being embedded and scaled far more quickly than existing governance frameworks (i.e., the rules ...

NOV 9, 2021 Article

Seven Myths of Using the Term “Human on the Loop”: “Just What Do You Think You Are Doing, Dave?”

As AI systems are being leveraged and scaled, calls are frequently made for "meaningful human control" or "meaningful human interaction on the loop." Originally an ...

NOV 3, 2021 Podcast

Time for an Honest Scientific Discourse on AI & Deep Learning, with Gary Marcus

In this episode of the "Artificial Intelligence & Equality Initiative" podcast, Senior Fellow Anja Kaspersen speaks with Gary Marcus to discuss the need for an open ...

OCT 29, 2021 Article

Mind Control to Major Tom: First State Regulates Use of Neurotechnologies

One of the last frontiers of science remains the human mind – but not for much longer.