Who Decides Who Decides What Conversations Are Allowed About Artificial Intelligence?

Aug 18, 2022

David Post wrote a book entitled In Search of Jefferson's Moose: Notes on the State of Cyberspace, in which he poses the question: Who decides who decides? He asks this question about the Internet, using the historical lens of Thomas Jefferson and how he used data. The key people, Post argues, are not the decision-makers themselves but those who decide who gets to decide; it is they who hold the ultimate power. Shoshana Zuboff, the author of The Age of Surveillance Capitalism, uses the same lens to explore power in the Information Age, asking (and answering) Who knows? (surveillance capitalist corporations); Who decides? (the market); and Who decides who decides? (surveillance capitalists).

In the development, use, and purchasing of AI systems, it is important to ask questions about who holds power, what conversations we are—and are not—having, and who is directing these conversations. Who is deciding who the decision-makers are? Where is the real power in these conversations? Who runs and finances the high-profile events that set the narrative?

As Meredith Whittaker argues in her essay "The steep cost of capture," the major tech companies largely determine what work is, and is not, conducted on AI by dominating both academia and the various panels set up by international organizations and governments to advise on AI research priorities. A culture has developed in which those who speak up against the prevailing narratives risk being defunded or shut down. In 2019, Whittaker co-authored a hard-hitting paper on the diversity crisis facing the entire AI ecosystem, describing the current state of the field as "alarming." Whittaker and her co-authors from New York University's AI Now Institute posited that "the AI industry needs to acknowledge the gravity of its diversity problem, and admit that existing methods have failed to contend with the uneven distribution of power, and the means by which AI can reinforce such inequality."

In 2021, Timnit Gebru, Margaret Mitchell, Emily M. Bender, and Angelina McMillan-Major, all respected scholars in their domains, presented an academic article challenging the reliance on ever-larger language models and training datasets for a number of reasons, including their environmental impacts and the hidden biases and opportunity costs of building models to manipulate language. The insights shared in the piece have since become a cornerstone of work in the AI field.

But just before the publication of this article, Gebru, one of the co-authors, was abruptly fired from her role as co-leader of Google's Ethical AI Team for her involvement in this work and for raising concerns that Google was not doing enough to remove biased and hateful language from its AI training data. In Gebru's words: "Your life starts getting worse when you start advocating for underrepresented people. You start making the other leaders upset."

"It takes courage to step up and speak the truth," says Mary "Missy" Cummings, former professor of AI and robotics at Duke University and an authoritative voice calling for stronger regulation of autonomous technologies and vehicles. Cummings is also a vocal critic of the limitations of deep learning and the maturity of current AI systems—and has spoken out about the risks of being trolled online as a result. "It's a luxury for me to be able to speak truth to power. Not everybody has that luxury."

Those who voice concerns risk being marginalized as anti-innovation, anti-growth, and anti-future. Bender, another co-author of the above-mentioned paper and a strong voice cautioning against "AI hype and overly focusing on strawmen problems," has shared publicly how she is often misquoted or has her opinions discounted. AI researcher and critic Abeba Birhane has likewise cautioned against accepting a singular, media-hyped narrative that "lacks the ability to critically appraise technology" before it is imported into, or embedded in, our daily lives.

When ideas, ethical considerations, and life-altering implications are not openly discussed, or when critical discussions are discouraged, technologies cannot be challenged or changed before, or even after, they are deployed. Those with the biggest stake in the current social arrangements, some suggest, are carefully engineering a social silence around the fallibilities of complex algorithmic technologies, now embedded and entrenched in our daily lives, and their potential to cause serious harm. These same technologies can exacerbate inequalities, entrench power structures, harm the environment, erode human agency, undermine political stability, and fundamentally change the global order.

The result is what Gillian Tett refers to as a "social silence," an idea borrowed from the sociologist Pierre Bourdieu. Tett gives the example of the 2007-2008 financial crisis: as a journalist with the Financial Times, she was one of the few to warn about the risks of the financial instruments that ultimately caused the crash. Insiders were not discussing these risks openly, she writes, while it was hard for outsiders like herself to pull together enough information to "join the dots."

Tett adds: "To be sure, by 2006 some individual bankers were feeling very uneasy about what was going on (and often wrote anonymous emails about this to my team)." We see clear parallels with the current state of discourse in AI. Many insiders who voice unease privately to us will not do so publicly, fearing that speaking out would result in being silenced, sidelined, or fired.

Data is often compared to oil. Perhaps the most fitting part of that comparison is the parallel between the communication methods of the associated corporate interests: for decades, oil companies failed to speak honestly about the negative human and environmental impacts of oil extraction.

A more apt comparison than oil, however, may be plastics. Revered in the 1950s, plastics have since caused widespread and irreversible harm through microplastics in the oceans and in our food chains. As that experience shows, such harms can be long-lasting and irreversible, and we do not have decades to get AI right. Ongoing and long-term harms can occur in the real world long after algorithms are made redundant, retired, or removed.

AI ethics is a growing field, with a body of literature and expertise. But proliferating policies have so far had little impact, often lacking substantive analysis and implementation strategies. Leading voices from the field of ethics are too often ignored or not heard at narrative-setting events. Wendell Wallach, co-author of Moral Machines and co-director of Carnegie Council's Artificial Intelligence & Equality Initiative, recently observed: "I realized that I was being intentionally excluded from key events for bringing perspectives on what making ethical choices entails."

Furthermore, just as doctors and lawyers are required to be licensed and to adhere to a code of conduct, we need to require standards of the computer scientists and engineers working in this space as a matter of best practice. Computer science and AI development are no longer merely an abstraction of theoretical mathematics with little or no bearing on day-to-day life. They have become a defining feature of life, with profound impacts on what it means to be human and on how to safeguard the human environment.

In fundamental physics, CERN creates a safe environment for conducting replicable scientific research by cultivating a culture of transparency and encouraging diversity of views. A similar approach in AI would seek to strengthen the anthropological and scientific intelligence around what type of computational methodology is needed to address specific problems. It would encourage open discussion of the ethical considerations around deploying different models in different environments, where they will have widely different safety, data-related, and downstream consequences. And it would encourage dissenting voices, even when heeding them may affect company valuations, complicate political ambitions, or lead to decisions not to embed or deploy such systems at all.

Gary Marcus has called for a "CERN for AI" to address issues related to the scalability of current AI foundation models and to ensure that AI becomes a public good rather than the property of a privileged few. Co-author of Rebooting AI: Building Machines We Can Trust, Marcus has also faced hostility for publicly cautioning against the overpromising trap of AI systems. He describes the unwillingness to engage in an honest and inclusive scientific discourse about current limitations as detrimental to the responsible long-term development of safe AI applications.

The consequences for society are already profound and widely felt, and the longer-term consequences for democracy and equality may exceed those of the global crisis that followed the social silence in finance. Zuboff's three questions (who knows, who decides, and who decides who decides) uncover the power structures at play.

To ensure appropriate safeguards, equality, or any kind of fairness in the use of AI and algorithmic technologies, we need to create a culture that holds leaders and those with power accountable. It cannot be only the elites who decide who decides. Diverse voices need to be permitted to engage in critical discourse about what is working, for what purpose, by whom, and for whom.

Silences, and the fear surrounding them, may be evidence of power structures or blind spots. And regardless of who decides, silences, whether intentional or unintentional, are something we always need to challenge.

Anja Kaspersen is a senior fellow at Carnegie Council for Ethics in International Affairs, where she co-directs the Artificial Intelligence & Equality Initiative (AIEI).

Dr. Kobi Leins is a visiting senior research fellow at King's College London and an expert for Standards Australia, providing technical advice to the International Organization for Standardization (ISO) on forthcoming AI standards. She is also an AIEI Advisory Board member and the author of New War Technologies and International Law (Cambridge University Press, 2022).
