Two Core Issues in the Governance of AI, with Elizabeth Seger

Mar 22, 2024

Which is more dangerous, open-source AI or large language models and other forms of generative AI totally controlled by an oligopoly of corporations? Will open access to building generative AI models make AI more democratic? What other approaches to ensuring generative AI is safe and democratic are available?

Carnegie-Uehiro Fellow Wendell Wallach and Elizabeth Seger, director of the CASM digital policy research hub at Demos, discuss these questions and more in this Artificial Intelligence & Equality podcast.

For more from Seger, read her recent article on AI democratization.


WENDELL WALLACH: Hello. I’m Wendell Wallach. My guest for this podcast is Elizabeth Seger, the director of the digital policy team at Demos, which is a cross-party political think tank in London. She is also a research affiliate at the AI: Futures and Responsibility project at the University of Cambridge. What I find fascinating about Elizabeth is that she has become a thought leader on two of the most critical issues in the governance of artificial intelligence (AI), issues that will help determine whether AI might enhance equality or exacerbate structural inequalities. The first of these issues is whether large AI models, particularly generative AI, are potentially too dangerous to be the AI equivalent of open-source software. The second is what it actually means to democratize AI.

Welcome, Elizabeth.

ELIZABETH SEGER: Thank you for having me.

WENDELL WALLACH: It is wonderful to have you. Please clarify for us why the question of whether AI, large language models (LLMs), and other frameworks should be controlled by large corporations or made open source is such a controversial issue.

ELIZABETH SEGER: The open-source debate around large language models and highly capable AI has become such a big deal and so controversial because it is a huge control question. It is a high-stakes control question.

On the one hand, not open sourcing a model, that is, allowing large corporations to have sole control, puts these models in the hands of a very few large tech companies, and doing this could prevent other players from being involved in what promises to be a very financially lucrative and beneficial AI industry.

On the other hand, open sourcing these systems could put control of them in the hands of the wrong people, and there is a lot of concern about the extent of damage that could be done by putting highly capable AI systems in the hands of malicious actors.

So we have a massive control problem on both sides and high stakes if we get the question wrong.

WENDELL WALLACH: I have heard you say that you think this distinction between open source and corporate control is really a false dichotomy. Please clarify.

ELIZABETH SEGER: Not necessarily the distinction between open source and corporate control but the idea that we are either talking about open-source systems or completely closed-source systems and having those be two completely separate entities. This is a false dichotomy. I think there are actually three false dichotomies at play here.

The first one is the idea that AI systems are either always fully open or always fully closed. There is some good work by Irene Solaiman that has become quite popular, in which she outlines a large spectrum of model-sharing options, from having a model completely closed to fully open. There can be application programming interface (API) access and different kinds of staged-release options in between, so there are a lot of other options we can explore in this space.
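To make that spectrum concrete, here is a minimal illustrative sketch in Python. The category names paraphrase the gradient of release options described above; they are not an official or exhaustive taxonomy.

```python
from enum import IntEnum

class ReleaseLevel(IntEnum):
    """Illustrative gradient of model-sharing options, from most to least restricted.
    The category names paraphrase the spectrum discussed above; they are not an
    official or exhaustive taxonomy."""
    FULLY_CLOSED = 0          # internal use only
    HOSTED_ACCESS = 1         # queried through a product interface
    API_ACCESS = 2            # programmatic access behind an API
    STAGED_RELEASE = 3        # access widened in phases to selected groups
    DOWNLOADABLE_WEIGHTS = 4  # weights released under a license
    FULLY_OPEN = 5            # weights, code, data, and documentation all public

# Example: release decisions can be compared as points on a spectrum rather than
# treated as a binary open/closed choice.
print(ReleaseLevel.API_ACCESS < ReleaseLevel.DOWNLOADABLE_WEIGHTS)  # True
```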

The second false dichotomy is the idea that when we talk about open versus closed we are saying that all AI models should be open versus all AI models should be closed. That is another false dichotomy. Oftentimes the discussion is just about whether some of the most cutting-edge, highly capable frontier AI systems should be open source, so it is possible to say that some systems should absolutely be open source whereas with others we need to be more careful.

The third false dichotomy is the idea that when we say open versus closed we mean a model has to be always open versus always closed. This is also a false dichotomy. There are options for starting out with a model being closed while we study its impacts and how society is interacting with it, and then making decisions about whether or not to open it further down the line. So I think this is a discussion that is becoming unduly polarized, which is making it hard to make progress, and we can do better by addressing these false dichotomies.

WENDELL WALLACH: Before we dive into what some of the different frameworks might be, let’s talk about the corporate model itself and whether the need for corporate control of risks is valid at all, whether it is only valid in some specific cases, and whether those cases could be addressed in other ways if we had an open-source model. I am among those who are deeply concerned that if we give the corporations total control over these large AI models we will exacerbate inequality, surrender unbelievable power to the leading AI oligopoly for the next hundred years, and create tremendous barriers to entry for others.

On the other hand, I do listen to all these fears about risks, and I am uncomfortable with the prospect that if we have open source we have all kinds of actors who want to use the products for socially undesirable purposes or purposes that have social implications that we just do not want to have moving forward.

How do you see this? Do you see that both sides of the argument are truly strong or do you think that it is because they are caught up in this false dichotomy that they are not looking more creatively at ways forward?

ELIZABETH SEGER: I think there is another dimension at play, which is that AI encapsulates far too many different kinds of technologies, from extremely narrow AI systems that pose very little risk and require relatively little investment to develop, through to the most cutting-edge foundation models that you pour billions of dollars into training. This is one issue: this umbrella is way too large, so when we are thinking about corporate control over a model we have to think about which models we are talking about and also which models pose which risks. I think it is a good idea to keep that in mind as this conversation goes forward.

In terms of the arguments around corporate control of these models, maybe the most convincing argument for corporate control that I have heard is that if you are going to have a model that is extremely dangerous, by all means let it be controlled by just three or four entities, because then it is easier to identify and easier to regulate. I don’t know that I buy that argument, but that is probably the most convincing one that I see.

You point out the key problem here, which is worrying about the power and influence landing in the hands of a few main actors for what could become an incredibly financially lucrative technology that has huge impacts on the well-being of populations. We could see massive socioeconomic divides between countries, between those who control AI and those who do not, between leaders and laggards, so how do we navigate that space?

Again, there is this question of open source. If we are just talking about some of the more cutting-edge technologies, the ones that cost billions of dollars to train, I think we want to regulate them and have measures in place to evaluate models before their release and to identify potentially dangerous capabilities. A huge part of that discussion led up to the EU AI Act. One of the main arguments against this approach to regulation is that the burden will be too great on new developers in an industry that is still trying to develop.

I think the key thing here is that, again, we are not talking about all AI models. We are drawing a line and saying that above a certain size or above certain capabilities more in-depth evaluations are triggered, and above that line we start having more stringent requirements for safety evaluations. These are requirements that the big developers of AI systems, the Metas and OpenAIs of the world, can afford to meet, so if anything I would see holding companies to these requirements as capping their power a bit and allowing other entities to try to catch up. If you are developing smaller systems and trying to enter the AI space and you do not meet the threshold above which the more stringent safety evaluations apply, that opens up the environment.

That is one key thing, to think about the technologies we are discussing. Most of the concerns around safety are around these frontier AI models. I think it should be rather uncontroversial to say we can be more stringent about that frontier without necessarily having to limit the distribution, sharing, and access to some smaller models that we can be less concerned about.

WENDELL WALLACH: As I recall, on the EU model the threshold is explicitly a risk threshold, so that they are categorizing risks and saying that above certain levels of risk these regulations kick in.

ELIZABETH SEGER: Yes. They have a risk threshold, but one way they are defining the risk threshold (there are a few different ways) is by model size, saying: “We will assume that anything above a certain model size is high risk.” I can never remember if it is 10^25 floating-point operations (FLOPs) or 10^26 FLOPs, because one is the U.S. threshold and the other one is the EU threshold, and they did that just because they had to be different from each other. I don’t remember which one is which right now, but it is 10^25 or 10^26, and they are defining that as one of the criteria, saying that if a model is above this size we are going to assume it is in the high-risk category. This is very much a line aimed at future AI systems, the next generation, what you guys are going to come up with next: we are going to start looking more closely at this.

We know that the compute threshold is a terrible proxy for model capability. Ideally we would just evaluate on model capability, except we don’t really know what capabilities we are looking for yet. So it is a terrible proxy for model capability, but it is nicely defined and clean. It is a clean cutoff. It just says that above that threshold you need to be held to more stringent model-evaluation requirements and safety evaluations.
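As a rough illustration of how such a threshold works as a trigger, here is a minimal sketch in Python. It uses the common back-of-the-envelope estimate that dense-transformer training compute is roughly 6 × parameters × training tokens; the example model and its figures are hypothetical, and the threshold constants simply reflect the orders of magnitude discussed above.

```python
# Minimal sketch of a compute-threshold check. The "6 * parameters * tokens" figure is
# a common rule of thumb for estimating dense transformer training compute, not an
# official regulatory metric, and the example model below is hypothetical.

EU_THRESHOLD_FLOP = 10**25   # order of magnitude commonly cited for the EU threshold
US_THRESHOLD_FLOP = 10**26   # order of magnitude commonly cited for the U.S. threshold

def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of total training compute (floating-point operations)."""
    return 6 * n_parameters * n_training_tokens

def triggers_extra_requirements(n_parameters: float, n_training_tokens: float,
                                threshold: float = EU_THRESHOLD_FLOP) -> bool:
    """True if the estimated training compute crosses the chosen threshold."""
    return estimated_training_flop(n_parameters, n_training_tokens) >= threshold

# Hypothetical example: a 70-billion-parameter model trained on 2 trillion tokens.
flop = estimated_training_flop(70e9, 2e12)
print(f"{flop:.1e} FLOP -> triggers requirements: {triggers_extra_requirements(70e9, 2e12)}")
```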

What we need to do now is take the next step and say: “Okay, you do those safety evaluations, you find X, and then what?” We have not made that step. We don’t know: you find X, and then are you allowed to share the model or are you not allowed to share the model? What happens next? That is the next step in the discussion.

WENDELL WALLACH: The discussion is still directed at something which looks like it is two or three years off at least. That gives us time to see what the capabilities are that are emerging from these larger models.

ELIZABETH SEGER: I don’t know if it is two or three years off. It is at least putting requirements in place to start doing these evaluations. Groups like the National Institute of Standards and Technology (NIST) and the Defense Information Systems Agency in the United States and the AI Safety Institute in the United Kingdom are doing some great work trying to more precisely identify what risks and what capabilities should act as triggers for different requirements.

WENDELL WALLACH: If I have understood you correctly, the trigger is that at a particular size you then have to start looking at the risks, and you have to demonstrate ways of controlling those risks in order to allow open access.

ELIZABETH SEGER: Not yet. We have not made the next step as to when access is allowed. We have just said: “Above a certain size you need to start doing your model evaluations and you need to start red-teaming, and you need to start showing that you are doing these safety evals.” We have not yet made the next step to say: “And then, depending on what you find, these are the different model-sharing options you are allowed to do.” That is what a lot of people are working on right now. The Partnership on AI has a good project on that that is ongoing, so I think we will see a lot of development on those particular model-sharing requirements based on what you find with the safety evaluations in the next year.

WENDELL WALLACH: Do you read that as giving license to those who want to create new generative models that are below that threshold to go ahead and do so or to allow those new models to be open source or at least the equivalent of open source?

Maybe we should go back for a moment. We use this language of “open source” because that is what got applied to software, where it referred to whether software was made openly available, could be tinkered with, how that was managed, and so forth, but it is not clear that the open-source language is exactly appropriate when we are talking about large language models and other generative AI.

ELIZABETH SEGER: That is a very good point. You’re right, we should take a giant step back here and flag that when we talk about open-source AI we very rarely mean open source in the strong sense. Open source has a very specific meaning with respect to software that takes into account certain license requirements for openness and nondiscrimination and the software being free, and we have not yet figured out how to define open-source AI and what it means.

Part of the problem is that when you talk about open-source software it is just the code you are talking about. When we are talking about open-source AI or model sharing and AI there are so many components that go into an AI model. You have the inference code and you have the training code as well as the model weights. There is the data. You can have background documentation around how the model was built, and all of this can be shared or not independently of each other.

The Open Source Initiative has an ongoing multistakeholder project trying to get a more precise definition of what specifically we mean by open-source AI. Very often when people say “open-source AI” all they mean is that model weights are being shared.

WENDELL WALLACH: Do you mind defining, even crudely, what “model weights” means?

ELIZABETH SEGER: What are model weights? Model weights are the numerical values that specify how an input, such as text describing an image, is transformed into the output, which in this case would be the image itself. You could type into an image generator, “a dog wearing a hat,” and the model weights are what determine how that prompt actually turns into an image of the dog wearing the hat.

A good analogy for model weights that I have heard has to do with .mp3 files on your computer and playing .mp3 files because model weights alone are just a string of numbers and by themselves you cannot do much with them. You have to have an infrastructure to interpret them. It is like having an .mp3 file—that is your model weight—you put the .mp3 file on your computer. If you just open up that .mp3 file, it would just be a random string of numbers and you would have no idea what to do with it, but then if you have an .mp3 player—that would be the rest of the model, in this case notably the inference code—that allows you to play the .mp3 file and produce the beautiful music. That is how I like to think about model weights, and we can move on from there.
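To make the analogy concrete, here is a minimal numerical sketch in Python, with all values made up: the weights are just numbers, and the small inference function plays the role of the .mp3 player that turns them, plus an input, into an output.

```python
import numpy as np

# Toy illustration of model weights. On their own the weights are just numbers (the
# .mp3 file in the analogy); the inference code below (the .mp3 player) is what uses
# them to turn an input into an output. Every value here is made up for illustration.
weights = np.array([[0.2, -0.5],
                    [0.7,  0.1]])
bias = np.array([0.05, -0.3])

def run_inference(x: np.ndarray) -> np.ndarray:
    """Inference code: applies the weights to an input to produce an output."""
    return np.maximum(weights @ x + bias, 0.0)   # one tiny layer with a ReLU

prompt_encoding = np.array([1.0, 2.0])   # stand-in for an encoded prompt
print(run_inference(prompt_encoding))    # stand-in for the model's output
```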

Going back to the conversation around what open-source AI is, oftentimes when people talk about open-source AI they are just talking about sharing the model weights. The reason people care so much about the model weights is that without them you do not know how the model gets from its input to its output, and you cannot manipulate the model either, cannot change how it goes from whatever that input text is, something like the dog wearing a hat, to the output. What this means is that a lot of the harms from open sourcing come from being able to manipulate, change, or add on to those model weights, or to fine-tune them, which means training them to do something much more specific in a specific way.

The harms around open sourcing largely just have to do with model weights, but in order to reap the benefits of open sourcing an AI model you often need access to much more than the model weights. You need the training code to understand how those model weights were derived, you need the inference code to be able to run the model, and the more information you have about how the model was developed, what the iterative process looked like, and what kinds of decisions the developers made, the better you can dig into the model, its foundations, how it was developed, and how to replicate those processes.

There is an imbalance there that I think a lot of people do not quite grasp all the time, the fact that a lot of the harms just come from sharing the weights. For a lot of the benefits you need much more comprehensive access to the model.

WENDELL WALLACH: Since we are getting into the weeds, it might be helpful for listeners if we clarify whether the model weights themselves are fixed. It is clear from what you said finally that if you can get into the training code and so forth you may be able to change the way the model weights have been determined, for example, but is the determination or the changing of those weights largely a human-centric activity where humans change the way those are determined or utilized, or are model weights being changed on the fly by the AI system itself?

ELIZABETH SEGER: This is starting to get into some technical ground that I do not have the best foundation on. For the most part when we talk about model weights we are thinking about how humans will go about modifying those weights by retraining a model with new data or fine-tuning the model. Fine-tuning is a process of training where you basically make the model more precise; you train it on a specific data set to make the model really good at doing a particular thing.
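As a minimal sketch of what fine-tuning means mechanically, here is a toy example in Python with made-up data: rather than retraining from scratch, we start from existing “pretrained” weights and nudge them with a few gradient steps on a small, task-specific dataset.

```python
import numpy as np

# Toy fine-tuning sketch: start from existing ("pretrained") weights and adjust them
# slightly on a small task-specific dataset instead of retraining from scratch.
# The model, data, and learning rate are all made up for illustration.
rng = np.random.default_rng(0)
weights = rng.normal(size=3)              # stand-in for pretrained model weights

X = rng.normal(size=(20, 3))              # small fine-tuning dataset (20 examples)
y = X @ np.array([1.0, -2.0, 0.5])        # targets defining the new, specific task

learning_rate = 0.05
for step in range(200):                   # a short run, far cheaper than pretraining
    predictions = X @ weights
    gradient = X.T @ (predictions - y) / len(X)   # gradient of mean squared error
    weights -= learning_rate * gradient   # nudge the existing weights

print(weights)   # the weights have shifted toward the new task's behavior
```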

That said, I know that when people think about runaway AI-type scenarios they are thinking about models that can start self-modifying. This is not outside the realm of possibility. There might be aspects in which this work is underway, but this is technical ground which is getting beyond my area of expertise.

WENDELL WALLACH: I think where we are right now is understanding that these weights are relatively fixed and that the major alterations, at least at this stage of the game, are modifications humans can make in order to use, for example, a large language model or large image model for a specific purpose. They can already use them for deepfakes, but there may be kinds of uses they want to get into that the system is either not adapted for or may even have some constraints built in against.

ELIZABETH SEGER: Yes. I think this is why access to model weights is important, especially when you are talking about large language models that can cost billions of dollars just to train. If you do not have access to the model weights, and let’s say you did have the inference code and the training code, you could completely retrain the model on data sets that would have it do what you want, but that would cost you billions in compute.

This is where fine-tuning has been both an amazing development and an opening of the door to malicious use, because it takes relatively little data and relatively little compute to modify a model to do something very specific if you have access to the weights such that you are able to manipulate them through fine-tuning. You can basically train the model to do something very specific, and perhaps that specific thing is untoward, with relatively little investment.

That said, there are ways that we can provide access to fine-tune models without necessarily having to give open access to the weights. There are fine-tuning APIs and research APIs. I know we are thinking of getting into it later, but this may be one of those middle-ground approaches toward providing access for people to study and modify models for great applications without necessarily providing unchecked access for people who might try to use them for malicious ends.

WENDELL WALLACH: Let me see if I can reiterate some of this in case it is not clear for some of our listeners.

The parameters, the size of these models, have to be much larger before some of the EU constraints at least kick in and demand much more serious oversight of what the system can do and of its inherent risks. Inherent risks often refer to what the system might actually do itself without direct human control or with a lack of human control.

In the realm we are in now, what can and cannot be done is still largely determined by the way humans have designed the system. At the moment the data that has been accumulated in these large models, the language models, the image models, and the sound models, is all under the control of these large corporations. If I understand it correctly, they are neither releasing what the data in their models is or where it came from, nor are they releasing the model weights, but there are means to modify some of that without even knowing what it is if you add an additional piece of code, an additional application, that sits between the instructions going to the model and the actual output.

ELIZABETH SEGER: Yes. Different corporations have different approaches to how they release their models. If you think of, for example, OpenAI, they release their models behind an API. Like you said, it is that thing that sits between the model itself and how the users interact with it.

WENDELL WALLACH: That is ChatGPT for the people who have used that.

ELIZABETH SEGER: Yes. That is how you interact with the model. You put in your text prompt, “Tell me how to make chocolate ice cream,” and then it develops a response for you, telling you how to make your chocolate ice cream. That kind of interface is one kind of API, but there are other kinds of APIs that give you varying degrees of access to the model, so there are, for example, fine-tuning APIs that will allow you to actually fine-tune a model by training it on new data to do a specific thing without you actually having to have access to those model weights, so it acts as that interface between you and the model weights. That is really useful.
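For illustration, here is a minimal sketch in Python of what interacting with a hosted model through an API looks like from the user’s side; the endpoint URL, parameter names, and response format are hypothetical placeholders and do not correspond to any particular provider’s actual API.

```python
import requests

# Minimal sketch of querying a hosted model through an API: the user sends a prompt
# and gets text back without ever touching the model weights. The endpoint URL,
# request fields, and response format below are hypothetical placeholders, not any
# specific provider's API.
API_URL = "https://api.example-model-provider.com/v1/generate"   # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                                          # placeholder credential

def ask_model(prompt: str) -> str:
    """Send a prompt to the hosted model and return its text response."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": 200},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]   # assumed response field

print(ask_model("Tell me how to make chocolate ice cream."))
```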

But then there are other companies that have developed these large models, notably Meta in this case, and then released them. I don’t want to say “open source.” I will get a slap on the back of the hand for saying “open source,” because technically it is not open source because of the license restrictions, but it is openly downloadable, the weights as well as the code.

WENDELL WALLACH: A good programmer or a good technician could actually get into those weights and play with them.

ELIZABETH SEGER: Yes, get into the weights, play with them, build on top of the model to create new applications and new use cases without necessarily having to have all the capital to train the model yourself. You could be a relatively small developer or researcher and learn from, understand, and really dig into the model without having to sit there and figure out how to come up with several billion dollars to train a model of that size and scale yourself.

WENDELL WALLACH: I think what this should make clear to everyone is that the existential risk, the possibility that these systems have profound emergent properties that could be dangerous and even beyond human control, has not fully kicked in yet.

We are still pretty much in the realm where the risks largely involve misuse of these models by humans. Once in a while the model might produce an output that is poorly understood and is therefore used in a way it should not be, but that is not the major risk we are looking at at the moment. We are looking at whether the models have been developed in a way in which there are some constraints on prejudicial speech, overuse of biases, and so forth. Those sometimes get solved by larger models, but that is not where our big problems lie at the moment.

At the moment we are dealing with what applications humans will use these generative AI models for. Will they use them for constructive purposes or might they use them for mischievous purposes, such as interfering with an election or producing a deepfake where President Biden falls flat on his face, or bombarding the internet with information that is just false as a way of defusing something that for a political or economic reason you do not want perpetuated. Is that largely correct in your understanding?

ELIZABETH SEGER: Yes. Right now it is mostly a matter of: “We have these systems. They are tools, and like any tool it depends whose hands the tools are in.” That said, these can be quite powerful tools, and that is what we are worried about going into the future.

You hit on one particular risk case around, for example, messing with our information ecosystems, producing disinformation and deepfakes. Another one that people are worried about is around the use of generative AI and bio risk, so using these systems to develop new toxins or pathogens, so you can understand how those systems getting into the wrong hands could be quite dangerous. Part of the thinking here is around how do we control against those misuse cases, and there is a variety of options. One of those options is thinking about how the model itself is being distributed.

WENDELL WALLACH: That is a lot of what we have been talking about here in terms of whether the access to the models is open or at least the access to certain critical elements of the model is open or the access to those elements is totally controlled and under the control of OpenAI, Google, or somebody else who is building and distributing an API so you can at least have some access to using the model.

ELIZABETH SEGER: Yes, and I think it is about getting that balance right. You want to make sure that there is enough access for people to do the research, to have transparency into how the systems are built, what’s going into the systems, and being able to contribute to safety research. That is another huge benefit of open source, where you have an open-source system, whether you are talking about AI or software, and so many more people—members of the community—can contribute to safety research and to identifying bugs and proposing fixes. You don’t get that with a closed-source model.

There are options. We can work toward these middle-ground kind of approaches. We could have bounty programs, for example, where you have rewards out to encourage people to report flaws that they find in these systems, but it still does not quite parallel the open-source ecosystem where people can directly contribute.

WENDELL WALLACH: The open ecosystem—I am not going to use the word “source” as a way of getting away from this open-source application to software and software development—has sometimes been lauded as perhaps accelerating the ability to create defensive tools, guardrails, or other things that might be applicable. Do you see that as a valid point or do you think that is probably wrong?

ELIZABETH SEGER: I think it is absolutely a valid point, and it is one that we see time and time again with the development of open-source software.

I do have a concern, though, that that analogy to open-source software will not always translate perfectly into the realm of open-source AI. My reasoning here is that in the case of open-source software you have the open system, the vulnerabilities are relatively easy for malicious actors or “white hat” actors to find and identify, and then you can fix them and implement those fixes. So, yes, it opens up vulnerabilities, but you can fix them and address them just as quickly.

However, in the case of AI systems the vulnerabilities are still going to be easy to identify, manipulate, and take advantage of, but the fixes are potentially going to be much more difficult. The fixes here are things like the alignment problem for AI and the big safety and explainability issues that we have been throwing millions and millions of dollars at while making only incremental progress compared to the rate of technological advancement.

I think there is absolute truth to the point about open ecosystems, whether you are talking about open-source software or open-source AI. Those open ecosystems are going to allow for great progress in defensive capabilities. I am just not sure, in the case of AI, that defensive capability is going to keep up with the pace of development. I don’t know how that is going to pan out, but I think we need to be clear-eyed about the fact that the analogy might not translate perfectly.

WENDELL WALLACH: Let’s talk about the value-alignment issue for a moment. For those who are not aware, “value alignment” refers to broadly aligning the values of the system, its output, and actions—if it actually is in a position to take actions—with those generally accepted by humans. There are a lot of nuances to how that is framed or not framed. It is sometimes about the individual, it is sometimes about ethics, but it is this fuzzy word “values,” which some of us consider not very helpful because values are very broad and often conflict with each other, and you need something more robust than that to get ethical understanding or behaviors out of a system.

You alluded to the slow progress that is being made in solving what is called broadly this value-alignment challenge. Do you think that is solvable with any of the approaches we are looking at right now or do you think that is more like an arms race between the speed at which the models are being developed and the lack of funding and much slower speed at which the value alignment framing or guardrails are being developed?

ELIZABETH SEGER: The honest answer here is that I don’t know if it is solvable. This is a place where my technical background runs up against a wall. I don’t have the best insight as to how that research is going.

What I do know is that if you look at the investment that is going into safety research, whether it is the value-alignment problem, making sure AI systems make the kinds of decisions we would want them to make, or you are looking at explainability research, just being able to understand how the systems are making the decisions they make, the sheer amount of funding that is going into these problems is minuscule compared to the funding that is going into developing increasingly large and increasingly capable and powerful models.

I think this is a case where, if we are serious about this AI thing going well, we need to see much greater investment in safety research. This is one place where a lot of the arguments around open-sourcing AI systems settle on the importance of open sourcing for promoting safety research and getting more people involved in it. My thinking is: Well, that’s great, but if our goal here is progressing AI safety research, there are other things we need to be doing first to take that goal seriously, one of which is simply putting far more funding into those areas of research. We will spend several billion dollars, and it is looking like we might end up spending up to $10 billion, on training a single model. On the other hand, in the United Kingdom we are seeing the UK government put $100 million toward the AI Safety Institute. That is a huge sum, but it still does not compare to what companies are putting into training the next iteration.

I am all for open sourcing to promote safety research and open sourcing to find vulnerabilities, but there are other things we need to be doing as well, and funding is one of those. Companies could commit a certain percentage of their profits to go back into safety research if they were really serious about these safety problems.

WENDELL WALLACH: I appreciate your candor in how far you even understand this problem—and I have to admit I don’t understand it very well, even though, as some listeners will know, I am the coauthor of the first book that ever looked broadly at whether we can implement moral decision-making faculties into AI and sensitivity to moral considerations. I think the problem is not only that not enough money is going into developing strategies, but not enough money is there at all to even look at whether it is a solvable problem, and if so, what are the best strategies for solving it.

That is part of the problem I have with the value-alignment language, though I think the language is getting much more sophisticated than in the early days. The value-alignment language makes this such a broad, amorphous problem that it becomes unsolvable. At least the ethics language was an attempt to come up with strategies where you could put an overriding structure in place, whether you were talking about a rule-based system like the Ten Commandments or about human rights violations and so forth, rather than simply aligning your values with those you are interacting with and all the amorphous values that inform human interaction.

Again, I don’t think either you or I are going to solve that today, but I think it is a way of illuminating one of the critical issues going on, and that is that these models are being developed at an exponential rate with larger and larger sizes and more and more parameters, and yet we have not done the fundamental work of understanding whether we can put in proper restraints.

ELIZABETH SEGER: I agree. Two points coming off that: The first is this question of, can we even come up with a solution to some of these problems? That is why you get some experts who are calling for pauses, who are saying, “Just don’t go any further, just don’t build an artificial general intelligence.” That is where that mindset comes from: “This is going too fast, and we don’t know.” One way I heard someone put it was, “We’re building the genie before we have built the bottle.” It is not just that the genie has gotten out of the bottle; we forgot to build the bottle in the first place. “Can we build the bottle?” is a great way to put it. That is where these calls for pausing and taking a break while we figure out what to do next come from.

Another point that comes off what you have been saying is that it also means that this is a point where we need to have some humility and think about how we don’t know the answers. So when someone says, “Oh, I know the answers; AI is going to do this” or “Open sourcing is always good” or “Open sourcing is terrible and it is never good,” we don’t have that information. We need to be able to talk to each other. We need these discussions to not get polarized and to think clearly about what steps to take next. Very good points to make.

WENDELL WALLACH: As you look at the EU AI Guidelines, which are now becoming actual Regulations, do you think they are sufficient in this regard? Or, if they are insufficient, what would you like to see the European Union implement?

ELIZABETH SEGER: I think they are an amazing first step. They cover a very wide range of technologies, and I am not an expert on all of the EU AI Act. I have been specifically involved with thinking about those that pertain to open source.

I am a fan of the steps they took to set that compute threshold and say, “Above this threshold you have to do the safety evaluations.” They have set up the AI Office to help oversee this, and they are staffing up quickly right now. These are all great steps.

Like I said earlier in the podcast, this step of, “Here’s a threshold; above this threshold do your safety evaluations,” I think was a critical step, and we are seeing a similar thing happening with the U.S. executive orders. It looks like a lot of people are going to follow suit on that.

The next step is still lacking, and that is: okay, you do the safety evaluations, and if you find something, what does that mean? It is still up to the discretion of the individual developers whether or not they release the system, so there are no regulations or guidelines saying that, depending on what you find, you should do X or Y with regard to model release.

I think that is the next step we need to take. I totally understand why they did not legislate that. We don’t know enough. We don’t know what we’re looking for, and that is part of the problem, so a lot more work needs to go into that, a lot of funding needs to go into that, and we are seeing some of that work start with NIST and the AI Safety Institute in the United Kingdom, which is heartening.

WENDELL WALLACH: So it seems that what they are saying is that at a particular size level you have to be doing the safety research, but they are not really penalizing you if you continue on your course without having done anything, because there is no definition at this point of what you are looking for or of what regulations or restraints can be placed on you if you have not addressed these issues.

ELIZABETH SEGER: Not yet. There are transparency requirements, which could mean you get slapped on the wrist externally. It might not be great for your reputation if you find something obviously terrible and then release openly anyway. But I think it is a great first step, and it took an incredible amount of work. Just watching this process, it was amazing to see everything that went into it.

I am happy with what has happened so far, but we do need that next step of working toward more standardized guidelines, and that is not just to protect us from potential malicious use of AI and dangerous capabilities. It is also to give companies and developers much clearer guidelines around what they are and are not allowed to do, and it offers some protection in that respect too: you follow the guidelines, you follow the rules, you deploy your model. I think that can be helpful.

There are some issues right now where developers don’t know what standards they are going to be held to and how much of an investment that is going to take. That can be tough to deal with.

WENDELL WALLACH: So at least we have a first step toward them knowing what standards they are going to be held to at this stage of the game, though I am not sure we are yet making the large developers accountable for what could go wrong. I think we have given them too much latitude by not imposing general product accountability. In the United States we did it in effect to stimulate innovation, so we lowered accountability, but that does not work very well when we get to this stage of development and the misuses start to become manifold.

ELIZABETH SEGER: I don’t know how sustainable that is either. There is something to be said for the fact that people are going to use this product and they need to be able to trust the product. That trust is very often grounded in institutional trust and institutional accountability, the assumption that the people who are developing and deploying these products are accountable and can be held liable if things go wrong. There have to be mechanisms for underpinning this trust in AI. I could go on about trust in AI for a while, but I am not sure that approach of doing away with accountability will universalize for long.

WENDELL WALLACH: Let’s pivot to your other great topic. People constantly talk about the need to democratize AI. You have parsed out what this means in ways that I think few have even thought about. Perhaps you can share with us your concern about this language of democratization, how people are talking past each other often with it, and what the challenges are for different stakeholders.

ELIZABETH SEGER: Absolutely. The term “AI democratization” has been thrown around a lot, and this ties in nicely with our previous discussion on open source, so maybe not quite a pivot as much as a smooth transition.

Oftentimes when people talk about democratizing AI they talk about it in the context of open-source discussion. The argument for open sourcing is to “democratize AI, to put AI in the hands of the people.” I think Stability AI’s tagline was, “AI by the people for the people,” and they often talk about democratizing AI.

About a year and a half ago I got incredibly frustrated because this term was being thrown around but nobody knew what it meant. It was being used as a stand-in for all things good. They were democracy washing model sharing and open sourcing, and that seemed a bit problematic. In the typical fashion of a philosopher straight out of a Ph.D. I took that frustration and channeled it into a paper and a blog post.

The paper and blog post that came from this break down this concept of AI democratization, or democratizing AI, into four different kinds of AI democratization. These are four different ways the term is used, reflecting the different things people mean by it and the different sets of goals they are trying to achieve. I will break them down now.

The first one is the democratization of AI use. This is making the technology available for everyone to use, to benefit from, and to apply to new applications. This is about unleashing AI into public services, to streamline services and provide better access to care, all of these kinds of topics around use. When Microsoft uses the term “AI democratization,” this is often the kind they are talking about. Their goal is to embed AI in every application and allow it to improve user experience with their products.

The second kind of AI democratization is the democratization of AI development. This is the democratization that groups talk about when they talk about model sharing and open sourcing. It is about allowing more people to be involved in the development processes to build systems to serve their individual needs. This is what Stability AI and Meta often talk about when they talk about democratizing AI. It is putting the technology in the hands of the people to manipulate, change, and build on, and to be part of that development process.

The third kind of democratization, which is used less frequently and oftentimes more in civil service and academic circles, is the democratization of profits. This is thinking about the huge amount of value that is going to accrue to AI controllers, to the big companies that are developing these systems: How do we distribute that profit, how do we distribute that value, so that more people and more communities can benefit from it?

Some of the answer to that might be by democratizing development—let more people have their hands on the technology to build and develop—but there are other solutions as well. Some people have proposed different taxation and redistribution schemes. There is also an idea called the “windfall clause,” basically saying if companies had windfall profits, which would just be astronomical profits counted in terms of total percentages of world gross domestic product, companies would commit to redistributing them throughout communities in need and such. These are different ways of distributing profits.

The fourth—and I think the most important—form of democratizing AI is the democratizing of AI governance. This is democratizing decisions about AI, about how AI is governed, about who is governing AI systems, about who is controlling the systems, and about key high-stakes decisions like the decision about whether or not to share highly capable models that could be detrimental and that could have negative impacts; who is making those decisions? Should we leave it up to individual developers or individual CEOs or should there be more democratic processes involved?

I think this democratization-of-governance point is also important because different ideas around democratization can conflict. You can democratize development and potentially put a dangerous capability out there, and that might conflict with people’s values around democratizing governance and what they think should be done, so democratizing governance can take precedence as the way we work out what to do where values conflict.

The response to my extreme frustration around the use of this term was to hash out these four categories. I think one of the key observations that came from this was that when people talk about democratizing AI development and they are talking about open source, open sourcing is great for democratizing AI, but open sourcing is one form of model sharing, model sharing is one way to democratize development, and democratizing development is just one kind of AI democratization. AI democratization is a much more holistic project.

WENDELL WALLACH: Do you have any particular thoughts on where you would like to see the governance go? Are there particular things you would like to see that the European Union, the United States, or perhaps the United Nations do toward governing these models?

ELIZABETH SEGER: That is a huge question and in a way the question of AI governance.

I think in terms of governance mechanisms, a big question we need to address is who is making the decisions and who is influencing them. There is a lot of good work happening around corporate governance models, trying to understand, short of legislating and saying companies are required to do X or Y, how to put democratically informed boards or decision-making bodies in place to help guide the decision making within companies. I think that is one approach.

There is also a lot of interesting work being done around deliberative and participatory democratic processes to inform new legislation, for example around funds that are being funneled toward AI safety research or AI infrastructure development and how those funds should be directed.

A lot of these are new ideas about how to implement new democratic structures into governing tech, but I think they are starting to get a bit of a foothold. I am going to be excited to see where that goes in the next year because I think if you really are committed to the idea of building AI for the people and by the people and putting AI in the hands of the people to benefit them, it is key to allow people to have a say in how these decisions are made. As great as open sourcing is, not everyone can participate in AI development, not everyone has the skills or the access, and democratizing the governance processes is a way to allow everyone who is going to be impacted by AI to have a say.

WENDELL WALLACH: At the end of many of our podcasts we ask a couple of generic questions, both about the good, the desirable, and the fearful aspects of AI.

Let’s start out with the fearful aspects. Is there anything that keeps you up at night that you are particularly worried about in the deployment of AI?

ELIZABETH SEGER: One thing that keeps me up at night is my cat snoring. He does snore very loudly.

WENDELL WALLACH: They are not artificial at this point.

ELIZABETH SEGER: Not at this point, but it is quite impressive.

Besides Ziggy snoring, I think the thing that keeps me up at night around AI is the possibility that we get the question of deployment wrong. It is where we started this podcast, but we have a delicate balance to strike here. If we control too tightly, we put undue burden on smaller developers and we cut developers and countries out of the AI game, which can lead to huge social and economic disparities. It can feed into regulatory capture by Big Tech. There would be massive problems if we control too tightly.

On the other hand, as AI capabilities increase, if we control too loosely we could be putting some potentially very dangerous technologies into the hands of malicious users, and that would not end well either, so I think the thing that keeps me up at night is how are we going to get this right.

WENDELL WALLACH: From what I can gather from this podcast, that doesn’t just keep you up at night but has been the focus of a lot of your expression and work.

ELIZABETH SEGER: Yes. It is all-consuming at this point.

WENDELL WALLACH: On the other side of that, what makes you hopeful?

ELIZABETH SEGER: What makes me hopeful are some of the amazing applications being developed with AI right now that we can start implementing, especially around the deployment of AI to help streamline public services. I think there are great applications for healthcare specifically, delivering public services and streamlining processes so that patients can get more one-on-one personal attention from doctors, which improves the overall experience. Especially here in the United Kingdom, with the National Health Service, I know that is a key focus. I think some of the applications in public services are going to be great, and figuring out how to get those implemented will be an amazing benefit.

I think there are also some very cool things AI could help us do around improving democratic processes and structures and helping to moderate conversations: online platforms where we would be able to have conversations with huge democratic and participatory input to tackle difficult questions, with an AI moderator helping to consolidate those ideas and summarize them in a way that enables much more people-oriented policymaking. It is sort of an “out there” idea, but there are actually a lot of groups working on this, and some great ideas are coming out of it.

Those are two areas where I am really excited. So while I think we definitely need to be worried about what some of these technologies could do down the road, don’t get so bogged down that you forget that there are some amazing things they are doing now and will continue to do in the future.

WENDELL WALLACH: Thank you ever so much, Elizabeth, for sharing your time, your insights, and your expertise with us. This has indeed been another rich and thought-provoking discussion. Thank you to our listeners for tuning in, and a special thanks to the team at the Carnegie Council for hosting and producing this podcast.

For the latest content on ethics in international affairs be sure to follow us on social media @carnegiecouncil. My name is Wendell Wallach, and I hope we earned the privilege of your time. Thank you.

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.
