Why Soft Law is the Best Way to Approach the Pacing Problem in AI

Sep 29, 2021

The “pacing problem” arises when a fast-developing technology affects society more quickly than governments can protect people through laws and regulations.

Laws and regulations are “hard law.” In contrast, soft law refers to measures that are not directly legally enforceable but that can, nonetheless, sometimes create substantive obligations. Examples include guidelines, sets of principles, codes of conduct, private standards, and partnership programs. In many industries, hard law and soft law exist alongside each other.

There is still skepticism about the role of soft law in AI. Some posit that we must have hard law to protect people, and I understand why: when you look at the behavior of tech giants over recent years, it is difficult to say we can or should trust them to self-regulate.

However, what the skeptics often lose sight of is that self-regulation can indeed be a part of soft law, but it is far from the whole story. Other soft-law approaches can have more teeth and yield greater impact. One example comes from stem cell research: if you want to publish in any of the top academic journals, you need to demonstrate that you abide by agreed stem cell research standards. That has proved to be a powerful incentive for enforcing the standards in practice.

Together with my colleague Carlos Ignacio Gutierrez at Arizona State University’s Center for Law, Science & Innovation, I have been researching examples like this to discover what works well, what works at cross purposes, and what works less well. We compiled 634 AI soft law programs in a publicly available database. The programs originate from different places around the world and cut across industries and domains. We analyzed each one and tested them against over 100 criteria.

We found that about two-thirds failed to provide any public mechanism for ensuring the program will deliver on its promise. So soft law certainly isn’t a panacea for responsible use of AI and adjacent technologies. However, looking across past and current successes, two factors stood out that make soft law approaches more effective: credible indirect enforcement mechanisms and a perception of legitimacy. The stem cell example illustrates both.

Even if we might ideally prefer hard law approaches, the pacing problem means it is neither reasonable nor responsible to wait for the legislative wheels to turn. Nanotech provides an example of how long this can take: I was part of discussions 15 years ago in which some insisted that only hard law could govern nanotech, yet in the United States there is still no nano-specific hard law in place today.

While its big disadvantage is not being directly legally enforceable, soft law has many advantages over hard law. It is more flexible and adaptable. It is easier to experiment with soft law, seeing what works and abandoning what doesn’t. Soft law is normative and transboundary, whereas hard law is often enacted prematurely, before agreement and understanding are reached, and is confined by national jurisdictions.

Governments are aware of these advantages: one surprise from our database was realizing how many examples of soft law are set up and led by public authorities. People usually think of soft law as being led by non-government entities: companies, non-governmental organizations, or other broad-based stakeholder bodies. While that does happen, in a plurality of cases it is a government that sets up a soft law mechanism, recognizing its advantages in speed and flexibility.

Autonomous vehicles provide an example: more than a dozen governments have set out guidelines for developing autonomous vehicles, telling companies how they want them to behave. Soft law, such as private standards being established by various standard-setting organizations, has a chance of working effectively in cases like this because companies want to avoid the government moving on to formal regulations, which could impose more costs and constraints.

That said, soft law often leads to hard law in other ways. When a set of principles becomes well-established in an industry, courts can decide to hold companies liable for damage that occurs from their failure to follow those principles. Insurers refusing to cover legal liability can be a powerful incentive for companies to abide by soft law guidelines.

AI poses particular challenges for soft law. First, its black-box nature makes it harder to achieve transparency. Second, barriers to understanding make it more challenging to get meaningful stakeholder participation.

And third, guidelines work better when every company has an incentive to follow them – so that their products are interoperable with others, for example, or because they see reputational benefits. AI presents the more complex challenge of getting companies to follow guidelines that will hurt their profits, such as by refraining from deploying features that manipulate people.

While soft law certainly has its challenges, hard law also suffers from many weaknesses and limitations, especially for a rapidly evolving diverse technology like AI. Winston Churchill once remarked that democracy is the worst form of government apart from all the others. Soft law is far from a perfect tool to tackle all the challenges presented by embedding AI into our democratic and societal structures – but, pragmatically, soft law may be the worst form of governance, except for all the others.

Gary Marchant is a regents professor of law and director of the Center for Law, Science and Innovation at Arizona State University (ASU). His research interests include legal aspects of genomics and personalized medicine, the use of genetic information in environmental regulation, risk and the precautionary principle, and governance of emerging technologies such as nanotechnology, neuroscience, biotechnology, and artificial intelligence.
