Large language models don’t need a pause on research; they need an immediate product recall.

Apr 26, 2023

Hilary Sutcliffe looks at AI governance from another angle . . .

I feel like I just escaped from a cult after watching Tristan Harris and Aza Raskin’s compelling presentation "The AI Dilemma." This video from a private gathering of technologists in San Francisco explains in easy-to-understand terms why large language models (LLMs) like ChatGPT and Bard are so dangerous for individuals and society.

Harris and Raskin are founders of the Center for Humane Technology, and are also featured in The Social Dilemma on Netflix, a documentary about the dangers of social media. In "The AI Dilemma," they illustrate how the problems with large language models are not confined to the obvious ethical issues, like the chillingly sexually inappropriate chatbots already embedded in Snapchat, or the genius-level research chemistry skills the software has developed on its own, which allow it to tell anyone how to make nerve gas in simple steps. Harris and Raskin persuasively show that these problems and many others will only proliferate because the software is teaching itself, and its designers don’t know how it does so, how to stop it, what the societal harms might be, or how to prevent them.

It is obvious that we do not need the "six-month pause in research" proposed by tech developers. We need an immediate product recall now, until companies and their designers can prove to society and regulators that they can create software which is safe and designed with guardrails to prevent mass harm to individuals and society.

I feel like I broke free from a cult because I now look at the tech in a different way, and I can’t understand how I fell for the hype, or how governments have not done this already. It’s not that a few people "drank the Kool-Aid" and succumbed to the narrative of AI inevitability and its essential importance to society. Somehow we have all had our drinking water spiked with Kool-Aid, and have collectively been taken in by a narrative of specialness and inevitability peddled to us by the AI community in Silicon Valley.

I use the term product recall instead of moratorium, ban, or pause on research because this is, in simple terms, a faulty product which should never have been on the market in the first place and needs to be taken down instantly before more harm is done. It’s not even a product recall of the cost and complexity of, say, Toyota’s $3.2 billion global recall of 8.1 million cars because the gas pedal got stuck in the floor mats, or Volkswagen’s recall of 11 million cars for cheating on emissions standards, which with legal bills is calculated to cost the company $18 billion. It is just getting three companies to do the simple and cheap task of taking down software they are making no money from at the moment, to do what they should have done in the first place: ensure it is safe for individuals and society before they put it on the market.

These companies marketing this AI software are not as special as they like to think they are. They are just trying it on, like so many companies do, to make as much money as they can, or to kill off or keep up with their competitors, without any real thought to what happens to individuals and society in the process. What is needed to steer them away from this bad habit is a "pro-society" approach to innovation and regulation: a system which has the capacity to understand and prevent the broader negative impacts on society, as well as harms to individuals and the economy.

So following the recall, when the products are off the market, agreement should be reached, perhaps convened by a global coalition of civil society groups and citizens, academics, politicians, multilateral institutions, regulators, and tech businesses, on what is acceptable, what trade-offs should be made for the good of society, and how this could be achieved technically and legally before the products are allowed back on the market.

A great starting point might be to resurrect the excellent International Congress for the Governance of AI convened (virtually in 2021) by Wendell Wallach, co-chair of Carnegie Council’s AI & Equality Initiative, which included this global coalition and was due to meet in Prague just as the COVID-19 pandemic hit. Count me in!

I need a hashtag for this: What about #LLMrecallnow?

Hilary Sutcliffe is a member of the AIEI Board of Advisors and the host of Carnegie Council's From Another Angle podcast series.
