Large language models don’t need a pause on research; they need an immediate product recall.

Apr 26, 2023

Hilary Sutcliffe looks at AI governance from another angle . . .

I feel like I just escaped from a cult after watching Tristan Harris and Aza Raskin’s compelling presentation "The AI Dilemma." This video, from a private gathering of technologists in San Francisco, explains in easy-to-understand terms why large language models (LLMs) like ChatGPT and Bard are so dangerous for individuals and society.

Harris and Raskin are founders of the Center for Humane Technology and are also featured in The Social Dilemma on Netflix, a documentary about the dangers of social media. In "The AI Dilemma," they illustrate how the problems with large language models are not confined to the obvious ethical issues, like the chillingly sexually inappropriate chatbots already embedded in Snapchat, or the genius-level research chemistry skills the software has developed on its own, which allow it to tell anyone how to make nerve gas in simple steps. Harris and Raskin persuasively show that these problems, and many others, will only proliferate because the software is teaching itself and its designers do not know how it does so, how to stop it, what the societal harms might be, or how to prevent them.

It is obvious that we do not need the "six-month pause in research" proposed by tech developers. We need an immediate product recall now, until companies and their designers can prove to society and regulators that they can create software that is safe and designed with guardrails to prevent mass harm to individuals and society.

I feel like I broke free from a cult because I now look at the technology in a different way and cannot understand how I fell for the hype, or why governments have not done this already. It’s not as though a few people "drank the Kool-Aid" and succumbed to the narrative of AI inevitability and its essential importance to society. Somehow we have all had our drinking water spiked with Kool-Aid and have collectively been taken in by a narrative of specialness and inevitability peddled to us by the AI community in Silicon Valley.

I use the term product recall instead of moratorium, ban, or pause on research because this is, in simple terms, a faulty product which should never have been on the market in the first place and needs to be taken down immediately before more harm is done. It is not even a product recall of the cost and complexity of, say, Toyota’s $3.2 billion global recall of 8.1 million cars because the gas pedal got stuck in the floor mats, or Volkswagen’s recall of 11 million cars for cheating on emissions standards, which with legal bills is calculated to cost the company $18 billion. It is just getting three companies to do the simple and cheap task of taking down software they are making no money from at the moment, in order to do what they should have done in the first place: ensure it is safe for individuals and society before putting it on the market.

These companies marketing this AI software are not as special as they like to think they are. They are just trying it on, as so many companies do, to make as much money as they can, or to kill or keep up with their competitors, without any real thought for what happens to individuals and society in the process. What is needed to steer them away from this bad habit is a "pro-society" approach to innovation and regulation: a system with the capacity to understand and prevent the broader negative impacts on society, as well as harms to individuals and the economy.

So following the recall, when the products are off the market, agreement should be reached, perhaps convened by a global coalition of civil society groups and citizens, academics, politicians, multilateral institutions, regulators, and tech businesses, on what is acceptable, what trade-offs should be made for the good of society, and how, technically and legally, this could be achieved before the products are allowed back on the market.

A great starting point might be to resurrect the excellent International Congress for the Governance of AI, convened virtually in 2021 by Wendell Wallach, co-chair of Carnegie Council’s AI & Equality Initiative, which included just such a global coalition and had been due to meet in Prague just as the COVID-19 pandemic hit. Count me in!

I need a hashtag for this: What about #LLMrecallnow?

Hilary Sutcliffe is a member of the AIEI Board of Advisors and the host of Carnegie Council's From Another Angle podcast series.

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this article are those of the author and do not necessarily reflect the position of Carnegie Council.
