Deus ex machina or Diabolus ex machina?

Jun 15, 2022

In so many sci-fi movies, the resolution of the plot is that of deus ex machina―literally, a "god from the machine." In these movie endings, a highly improbable development solves a seemingly intractable problem, leading to a happy ending without reflection or even reason. Everyone leaves the movie comforted that technology has won the day.

In the 1980s, computer scientist Joseph Weizenbaum warned about the "magical thinking" around technology, saying that the only people in awe were those who misunderstood it. No small part of the exorbitant investment in AI can be traced back to magical thinking. Last year, for example, an EU Parliament document hyperbolically claimed that "AI can be thought of as the fifth element after air, earth, water, and fire."

This kind of magical thinking is not without cost. When the possibilities of a technology are hyped, its risks and limitations are often under-communicated and poorly understood. An honest and scientific discourse on AI is frequently lacking.

In truth, many supposed “AI solutions” are currently nothing more than corporate snake oil and pixie dust. Far from the cutting edge of AI research, they merely bring larger data sets and more computing power to bear on problems that could often be solved more cheaply and effectively by properly trained humans and investments in organizational culture. In perilous times, when global stability is weakening and tension points are multiplying, there is a risk that the promise of what machine learning and AI can accomplish will supersede ethical considerations around their known limitations.

Such data-centric solutions can be flat-out dangerous when embedded prematurely in critical functions alongside humans who are ill-prepared to understand and work around the limitations of the algorithms and the problems associated with large datasets.

Wealthy companies are pouring enormous resources into technological solutionism, much as German monarchs during Mozart’s lifetime directed huge amounts of funding at composers. The composers of that era were among the most prolific in the history of music, and the monarchs’ investments added value for generations of music lovers. But the current influx of investment is not designed to create something beautiful for the world, nor even necessarily to solve problems in the most appropriate way―it may be to keep tech company valuations and stock prices inflated, despite the shortcomings and risks of the technology and leaders’ failure to deliver after decades of promise. It may be to support value systems such as “long-termism” and “existential risk,” which are themselves particular framings that prioritize certain values and are, again, contentious and contested, with good reason.

Both corporate and privately owned institutions play an ever-increasing role in shaping which AI risks are tolerated as valid and on what timeline ethical conundrums are addressed―a phenomenon previously seen in other fields, including nuclear and bioweapons research. These attempts may result in "relational inequality," in which those most affected have the least say, according to political theorist Emma Saunders-Hastings. "Some people’s altruism puts other people under their power," she writes.

Arguably, strategic philanthropies, often run by private individuals with their own particular visions and goals, generate potentially "objectionably hierarchical social and political relationships." Large sums of money can define questions and narratives. The resulting risk paradigms are not necessarily inclusive or comprehensive, and they fail to give voice to those with less funding or power. NYU sociologist Mona Sloane, a member of the AIEI Board of Advisors, and Twitter director Rumman Chowdhury refer to these as "narrative traps," cautioning decision-makers to be “mindful of the kind of pitfalls of the stories that we tell each other as we do our work and engage professionally with these topics. They are warning signs or alarm bells that should go off every now and then.”

In fact, private individuals and makeshift philanthropic institutions are increasingly defining the narrative of what counts as risk, without much public scrutiny. The result is a lack of understanding and oversight of how philanthropic forces shape collective thinking, the scope of inquiry, and the corporate and policy responses to the inherent tension points and tradeoffs that come with the entrenchment of AI in our daily lives.

A strategic reorientation, a re-envisioning of ethics, and a new funding framework are needed to interrogate these complex and powerful systems: to enable free and realistic discussion of risks and limitations, and frank, fearless, science-based debate about which problems data and AI can and cannot solve, and which new problems the use of such technologies poses.

Whenever we see or hear something that resembles deus ex machina, an inexplicable use of technology that resolves a problem too simply or easily, we need to check our own "magical thinking." We have to go back to basics and ask what problem we are trying to solve, and how and whether technology, or AI, is the best solution, in consultation with those who will be affected by any proposed solution. We need to be aware of what values are being represented―and who is being served―and whether the most pressing problems we face as a global society are actually being solved.

An architect recently posted on Twitter that they wish people would stop building see-through stairs because “people wear skirts.” The same issues arise with data and AI when those affected by a technology have no say in how it is built. Diversity on the teams building the tools, and review of potential impacts prior to use and throughout the life cycle, are critical. Without them, significant risks are overlooked, products are oversold, and experts speak past each other.

We need to talk about power, and question whose interests or values are driving the "behaviors" of the "machine," each and every time. Deus ex machina approaches are not enough to manage risk and ensure equality now and into the future.

Anja Kaspersen is a senior fellow at Carnegie Council for Ethics in International Affairs, where she co-directs the Artificial Intelligence and Equality Initiative (AIEI).

Dr. Kobi Leins is a visiting senior research fellow at King's College London; an expert for Standards Australia, providing technical advice to the International Organization for Standardization on forthcoming AI standards; and an AIEI advisory board member. She is the author of New War Technologies and International Law (Cambridge University Press, 2022).
