Can We Code Power Responsibly? with Carl Miller

Sep 6, 2023 60 min listen

In this thought-provoking episode of the Artificial Intelligence & Equality podcast, Carl Miller tackles the pressing question: Can we code power responsibly? And moreover, how do we define "power" in this context?

Diving headfirst into the complex intersection of artificial intelligence and power dynamics, Miller, author of The Death of the Gods, warns against ascribing human-like understanding to AI systems and applications. He posits that power enables us to discern how lives are being shaped, identify the architects of change, and realize what has diminished in influence and importance over time. Reflecting on historical cycles of innovation and upheaval, Miller expresses cautious optimism, anticipating that humanity will navigate the complexities and harness these potent tools with sensible control.

Tune in for a rich discussion that promises to fuel a deeper reflection on the evolving landscapes of power and technology.


ANJA KASPERSEN: The theme of re-envisioning ethics and empowering ethics for the Information Age has become highly significant for both the Carnegie Council for Ethics in International Affairs and the Artificial Intelligence & Equality Initiative (AIEI). Central to this exploration is a deeper understanding of the evolving power dynamics, especially those driven by technology and AI. This theme permeates numerous episodes of our podcasts.

To delve more deeply into this topic, we have sought insights from individuals deeply immersed in understanding the impact of technology and AI on humanity, society, and the environment. Of particular interest is how this evolution reshapes our perception of power and its implications for technology and AI governance. Our distinguished guest today, Carl Miller, stands out as a leader in this arena. I am truly delighted to welcome him.

CARL MILLER: Of course, that is very flattering. Hi there, Anja. Hi there, everyone. Thanks so much for having me.

ANJA KASPERSEN: Carl is a pioneering digital researcher and the co-founder of the Centre for the Analysis of Social Media at Demos, an independent research organization dedicated to studying the virtual realm. Additionally, he serves as a visiting professor and research fellow at King’s College London. His 2018 book, The Death of the Gods: The New Global Power Grab, offers an insightful analysis of how the Internet, social media, technology, and AI are reshaping global and individual power dynamics, and of the historical patterns that inform the association between power and technology.

Before delving into the captivating technological issues that have engaged both you and me for years, Carl, I would like to provide our listeners with a deeper glimpse into your background. What drove your interest in technology, especially from the perspective of warfare, which intriguingly links our two stories?

CARL MILLER: Yes, it does.

My background is a slightly tangled one. It is always slightly challenging to turn it into a story, but basically I started my life in the War Studies department at King’s looking at warfare as an academic and a researcher. I was fascinated back then with this kind of flip that happened in my early career away from conventional wars and over to counterinsurgency conflicts. My initial jobs were around counterterrorism and counter-radicalization. That back then seemed to be the big challenge that we were all having to face, and those seemed to be the questions that we needed to have answered.

Now we have looped back, haven’t we? As the geopolitical arena has become hotter and as state-on-state conflict has become a much more pressing reality again, the intervening years that I spent basically studying social media have rejoined with power, so nowadays I spend most of my life trying to understand this strange, murky world of information warfare, about how information has strangely become a theatre of war. It has joined air, sea, land, and space as a place that warfare happens within. I have thought for the last four or five years that this has had the most utterly profound and important implications for the worlds that we all live in because it has brought all of us—our ears, our eyes, and our minds—to this strange new front, whether we like it or not.

ANJA KASPERSEN: Building on what you just said, a few years ago you posited a very compelling notion: “Power lets you zero in on how lives are being shaped, who is doing the shaping, and just as importantly what has become less influential and important than you think it was.” I personally find that statement to be profoundly true, talking about new theaters of warfare, new political discourses, or how we think about technology in general, and I would appreciate it if you could expand on this statement. I believe it holds significant food for thought that could greatly benefit our listeners.

CARL MILLER: I will chomp on this food for thought for a moment. Thanks for asking this because I have always thought that power is a marvelously useful idea, especially in moments like ours that we are currently living through. It is also very slippery, and we use it to describe a whole array of different things, from coercive power and force through to economic power and incentives all the way through to ideational power and the power of persuasion, ideas, and arguments. What power really is, deep down, is the ability to shape the world and therefore our lives: in our hands, that ability, and in the hands of others, of course, the ability to shape our lives in turn.

I started to think seriously about this back in 2017 and 2018 when I was writing my book, and I have dwelt on it since then. I think in moments of great revolutionary change it is an extremely important idea to turn back to, and we always do. Think about discussions around the commanding heights of the economy and the means of production and who owns them during the Industrial Revolution, think about people debating the authority of the state in the 19th century as the modern state emerged, or think in the 1960s about scholars like Foucault trying to expose the hidden operations of power and language. Each and every time the world changes in a big way, power changes: who has it, how it is being used, what it is being used to do, and even what it looks like. They all reshuffle and redistribute.

Focusing on that and asking how our lives are being shaped, what those forces look like, and what they mean I think allows us to make sense of moments of revolutionary change. That is what I am doing right now because I think we are living through one of those moments. I think AI is probably the leading force which is reshuffling everything now, and it seems to me right now—and this is an irony in a world so awash with data, with so many algorithms, and so much analytics—that we are living in a very mysterious moment. I see our horizons actually drawing in. I think our future is becoming less known, even perhaps less knowable, so this is exactly the moment when focusing on an idea like power lets us peer a bit into the future as well and hopefully make sense of how our lives are being reshaped in that way.

ANJA KASPERSEN: In dealing with this uncertainty, this mysterious future that we are entering into, do you find power to be a helpful framing? And is it a framing that is understandable to people? Do you find that people are receptive to this way of looking at it?

CARL MILLER: I think so. I think power, when it is in operation at least, is something which, even if it takes very weird forms, is ultimately something that everyone can recognize. A hacker might use weird forms of power to hack into a computer. What I was trying to do, by the way, in my journey around the world was to actually get onto the frontlines of power. I don’t think you can simply Google an inquiry like this. You need to get out of your room, get out on the road, and actually meet the people and go to the places where power is being fought over, won, lost, reshaped, competed for, and used.

It might be in strange forms, but you can see it happening to you. If you are a hacker, you can feel this new form of power, if you are an open-source investigator you can, if you are someone whose life has been completely shaped in one way or another by AI at the moment you can, so I do think it brings it back to people. When you, for instance, start focusing on, say, power in the workforce, you actually can use that to connect to the very real immediate concerns that I think so many people have around AI right now, which very practically is like: “Am I going to have a job in the future? Am I about to get automated? Am I about to have my wages cut? Am I about to be competing against a machine somehow?”

Yes, I think ultimately it is very, very practical. It might feel abstract, and in a sense of course it is because we can apply it in so many different realms of our lives, but the answers it gives us matter to us all.

ANJA KASPERSEN: Before I move on to a different topic, I am going to pick up on one thing you said. You referred to hackers, and I remember very clearly from your book that you have this very interesting statement where you describe hackers as a “new kind of ruling class,” if I remember correctly, which I thought was interesting, and I think you are making a very good point. Can you speak a bit more about that, about systems that are highly cyber-reliant, et cetera?

CARL MILLER: You go to Las Vegas, and you are hanging out at Snackus Maximus in Caesar’s Palace, and you are in the dry Nevadan heat, and then all of a sudden down the road comes this whole new kind of player that suddenly arrives. They have got shock-blue Mohawks and blockchain tattoos, and they have code jokes written across black T-shirts. Once a year every year DEF CON comes to town, and tens of thousands of the world’s best hackers all descend on Las Vegas. It is a riotous week where they talk, exchange, and compete with each other.

On these glowing stages, the world’s best hackers will reveal the year’s most fiendishly clever hacks, and on that stage I saw the most unbelievable things. I saw hackers hack into laptops using light you cannot see and sound you cannot hear. I saw hackers demonstrate that they could cause wind turbines to burst into flames, all kinds of operations of power which were both weird and unmistakable at the same time.

They were also a community. They had shared values, not political values for sure because there are Nazi hackers, hippie hackers, Putin hackers, and Ukraine hackers. Every major conflict has hackers on both sides, but they were united in the relationship they had to technology, and in that way they also were very, very different from the rest of us. To them these black boxes that we carry around in our pockets are not closed, not unknowable, and not mysterious. They are full of procedures, chips, operations, and processes which are readable to them, cognizable, and therefore manipulable. They have a completely different relationship to the world via technology from the rest of us, which I thought was a new source of power.

I thought, Yes, the most capable of their number in this strange community are like a new ruling elite. They have got very rich, they have got very powerful, and they can do all kinds of things that the rest of us cannot really do. That I think is a great illustration of a more abstract point I was trying to make. One of the things we need to do at the moment is to track down the realities of power today, especially when they are overlooked, weird, and misunderstood. There in that community of hackers we have a great example of that.

ANJA KASPERSEN: You could go a bit beyond the community of hackers because I think you are right. There is definitely a lot of informal power in having those skills and capabilities in our information-reliant reality right now. Do you think that there is a new type of ruling elite that goes beyond hackers, given the sheer amount of money, data, and resources that they hold?

CARL MILLER: There is one concentration of power, which will not be a surprise to you or any of your listeners here, which is unmistakable, and that is in that small, shallow-bottomed basin in California in the concentration not just of money but of decision-making power—and moral power in a sense—to shape the technologies and therefore our lives from Silicon Valley. It is unmistakable.

That is not a story that begins or ends with AI, of course. That is a story that goes back all the way to the 1940s and 1950s, but it is now being extended to AI too, and what we are seeing are the network powers of the very large Internet companies of the early Web 2.0 age now being translated into the kind of leadership of frontier models in AI. I think that is one thing which is already very clear. There is really only that place which is breaking new boundaries and breaking new frontiers when it comes to artificial intelligence development. We can delve into that if you like. I think there are some very interesting forces at play.

Wherever you look—and this is I think the strange characteristic of our age—you can see power concentrating and also in a weird way diluting at the same time. I think this onrush of power concentrating alongside the massive liberatory potential of technology is the dynamic which has convinced me more than anything else, so right alongside the winner-take-all behemoths of Silicon Valley you have someone like Eliot Higgins, who sets up Bellingcat and goes from arguing in the comments pages of The Guardian to becoming a kind of icon of open-source intelligence and an example of a whole new form of journalism.

Or you have the “digital democrats” of Taiwan, people who begin by protesting outside of their parliament, literally locked out of the political process in 2014, and then via the Sunflower Movement—they would call themselves “civic hackers,” I think—coming into power, into government in Taiwan, and building a digital democratic process to cast, shape, and ultimately create new law. It is a strange, multi-headed beast, this distribution of power. It is certainly not linear.

ANJA KASPERSEN: Let’s dig a little deeper into what you just said about these new power elites, and then we will come back to the Taiwan example because I understand you just received a grant to look more deeply into these experiences as well. Can you start with the new technological elites that hold enormous amounts of power without formal accountability mechanisms?

CARL MILLER: That is 100 percent true. In many ways the last ten years of my life have been spent trying to change that, trying to convince governments that they need to step in and bring social media platforms, digital spaces, and technologies under properly constituted democratic governance. It is unbelievable how long it has taken for governments to be convinced of that case, and until that happened what we had was either these companies discretionarily making judgments themselves as to what they think the corporate risk is or what they think is reputationally too threatening, or decisions shaped largely by a single American political constituency, both of which are totally unacceptable when you consider that these are platforms making some decisions that are literally matters of life and death and also that they touch the sheer variety of the world in ways which no single polity, no single country, should ever manage or shape unilaterally.

That is the story of Web 2.0. Sticking my neck out and being slightly optimistic, I think that experience is exactly why we are not about to see it repeated again for AI. Firstly, if you think about it, social media, however damaging it has been to have it in a wild, ungoverned state, is tame compared to what AI is going to do. AI as a technology is not advertising. It is not even just remaking the social world. It is far faster than that. It is going to reshape an almost unimaginable number of things.

ANJA KASPERSEN: Including what it means to be human.

CARL MILLER: What it means to be human, what it means to be a machine, how we talk to each other, how we create, how we connect, and how we work. The mind boggles as to what is not going to be reshaped by AI.

First, governments and politicians recognize that they cannot afford to stay out of governance, and second, we have now reached the point—12 years too late, but nonetheless we are there—where we are stepping in and regulating. At least the European Union is, the United Kingdom with impending legislation, and likely the United States at some point as well, so the whole political environment has changed. AI, much like Web 2.0 now, cannot go under the radar anymore, so I do not think we are going to see a laissez-faire position from governments. We are certainly not going to, because we already have legislation in China, legislation in the European Union, and likely in the United Kingdom and United States sometime down the road.

What we are going to see is something else, which is just this sheer challenge of how you either regulate or legislate or do something else as a government to something as fast-moving as AI. I think that is going to be the cross-cutting struggle which will define that power dynamic, which, by the way, I would say is by far the most important power dynamic in AI, whether and how it, as a form of power, is confronted by counter-powers created by governments.

ANJA KASPERSEN: That is an interesting point you made, Carl. In your book, which came out a few years ago now, you expressed a view that within the realm of traditional institutions and centers of power: “Technology weakens the very structures and institutions that determine the allocation of power, and in doing so it undermines the established norms dictating the legitimate exercise of power.”

I find this concept intriguing and believe it serves as a suitable gateway to delve into more of the fundamental thesis that underpins your work. I would like you to expound on this concept, these ideas, and in doing so maybe also provide us with an even deeper understanding of the history of power and its association with technology over time.

CARL MILLER: I became interested—I think everyone is in one way or another—in how power is controlled. The way I imagined that in my head was that wild power is a frothing, writhing thing. In the hands of us all it can be used to do all the things that human beings always do to each other. We can be wonderful to each other and absolutely horrible. Power is used to do all of that.

One of the things that we as civilizations always try to create is a way to control power, and I imagine that as the bars on the cage of this wild thing. Those bars can look different. Laws are the most obvious and most coercive way in which we control power, but norms are super-important as well, professional codes, ethical frameworks, standard rules of practice, and a million different things, and all of them form this civilizational mesh which basically defines in a slightly woolly way what power can be used to do and what it cannot. When we see big shifts in the terrain of power, either in who has it or what it actually is, it seems to me that it manages to wriggle around those bars in new ways.

That is what I saw coming back from this journey around the world that I took to try to understand power, that it was in this wild state—new forms of monopoly that are not captured by monopoly law as such; information warfare as a form of warfare that is being fought outside the rules of war. Weirdly, wars are a very rules-based activity, but we do not have rules of engagement in the information space. It is completely different from what we have for conventional, kinetic conflicts. In politics we have digital electioneering, but there were not any rules set around that, and it seemed to me that all kinds of things could be done to try to win elections online that you were not allowed to do if you were doing the same thing in an offline analogue.

In all of these different parts of our lives—business, politics, warfare, love, life, and all of it—it seemed that power had taken these wild forms, and I think now with AI even more so.

The real struggle to me was never going to be about the technology itself. It was always going to be this squishy, human art and science of remaking those bars: How do we recast the norms? How do we create more and better laws? How do we regulate effectively? How do we do all of the things to basically put power back in its cage?

ANJA KASPERSEN: Power is one of those concepts where, depending on who you ask, especially in political science, you get a slightly different answer.

One of the things I have been worrying a lot about, and I have said this in some of my work, is this notion of who decides, who gets to decide. What are you seeing from your end and where do you see democracy’s place in all of this?

CARL MILLER: That is a wonderful question: Who gets to decide? This is a great moment for me to talk about the project that you alluded to earlier because we are trying to create rules in new ways, ways that do not rely on specifically elected political leaders but give space for everyone to be involved. This is happening right now, and it is something that anyone listening to this can, if they so wish, get involved in themselves.

The grant is from OpenAI, and the idea is to try to build new ways of setting rules to govern AI. These have to be fast, they have to be constantly evolving, and they have to include a great number and diverse range of different people in order to be at all meaningful.

We have teamed up with vTaiwan, the Taiwanese “civic hackers” that I mentioned earlier. I would say—and I am a superfan of their work—that they are the most successful digital democrats in the world. They have built a process to create a consensus-seeking online debate, and through a mixed-strategy process, including actually meeting up in real life—Gasp! Shock!—and getting to know each other better, that debate turns into law. They have managed to use this to break through extremely incendiary and very polarized issues.

The first time they rolled this out was with Uber. If you remember back to 2016 and 2017 there were strikes around the world about Uber. There was regulatory action. You had Uber drivers on one side, taxi drivers on the other, and commuters in between. It was a very, very angry debate.

They used a platform—the same one we are using now—called Polis. It is marvelously simple, with a small engineering tweak that changes everything. You get onto this platform and you are asked questions—we are going to be asking you questions about AI and governance—but instead of surfacing the matters on which people most disagree, or that get people angriest, or that get people retweeting the most, it begins to surface the statements which are most consensual and gives those the most visibility. What that does is gamify consensus, so people start trying to craft messages which they do not just think are playing to their home base but are actually going to find some level of support amongst other people as well.

There is a very clever technology rumbling under the hood of Polis. What it is doing is drawing us all onto an attitudinal map so it can see the different tribes that emerge, and that is how it is redefining consensus. It is not just the sheer number of people; it is people from the different tribes who all begin to agree. It has a very, very well-proven track record of basically unearthing and excavating consensuses that are normally hidden by the froth of angry debate.
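To make that mechanism concrete, here is a minimal sketch of the kind of pipeline Miller describes: project participants' votes onto an attitudinal map, group them into opinion tribes, and rank statements by their weakest support across tribes rather than by raw popularity. The vote data, cluster count, and scoring rule below are illustrative assumptions, not Polis's actual algorithm.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical vote matrix: rows are participants, columns are statements.
# +1 = agree, -1 = disagree, 0 = pass. (Made-up data, purely for illustration.)
rng = np.random.default_rng(seed=0)
votes = rng.choice([-1, 0, 1], size=(200, 12), p=[0.3, 0.2, 0.5])

# Step 1: project everyone onto a low-dimensional "attitudinal map".
coords = PCA(n_components=2).fit_transform(votes)

# Step 2: cluster participants into opinion "tribes".
tribes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)

# Step 3: score each statement by its *lowest* agreement rate across tribes,
# so a statement only ranks highly if every tribe tends to agree with it.
def cross_tribe_consensus(votes, tribes):
    scores = []
    for s in range(votes.shape[1]):
        agree_rates = [(votes[tribes == t, s] == 1).mean()
                       for t in np.unique(tribes)]
        scores.append(min(agree_rates))
    return np.array(scores)

ranking = np.argsort(cross_tribe_consensus(votes, tribes))[::-1]
print("Statements ranked by cross-tribe agreement:", ranking)
```

Ranking by the minimum across tribes is what "gamifies consensus": a statement that delights one tribe and enrages another scores poorly, while one that every tribe leans toward rises to the top.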

Our idea is to get all the different tribes in the AI debate into a single space. You have your gloomers and your boosters and your doomers and your boomers and AI governance people, your engineers, and simply confused, frightened, and maybe excited people in the middle, everyone, and we get them into this same kind of consensus-seeking environment and see whether we can find hidden consensuses which we cannot see at the moment because of tribalism and the way in which normal social media platforms work. If we can excavate them, that is going to be something that is fed back into OpenAI, which we will publish and write about. We will use environments like Chatham House’s task force on AI to try to feed into the policy and governance debate and do our best for all the people involved to create meaningful outcomes from that debate, so it is not just a talking shop. We are trying to turn out something which has moral weight, consensus items in and of themselves.

That is our promise to them. We will do our best, and that is something which will continue to live on. The idea is a “recursive public,” which just means that it is a public which continues to form, shape itself, and hopefully expand.

ANJA KASPERSEN: That is very interesting. Also, we cannot have a conversation about power without talking about large language models (LLMs), generative AI, and the power that these models generate. Do they represent something different to us in terms of power, who gets to hold it, and who gets to express it?

CARL MILLER: Firstly, we are already talking about a pre-GPT age and a post-GPT age. I think it is unmistakable now that November 2022, when ChatGPT suddenly rolled out, was one of the most important moments in the history of AI, for different reasons for different people. Even for the technical experts the progress was staggering. There have been all kinds of engineers telling me that they did not expect to see anything like this in their lifetimes. The progress was definitely nonlinear from the past. Also, that progress was being made in ways which benefited a very small number of very large companies. The center of gravity has shifted away from universities and away from small research outfits and basically to a very small number of companies. The approach is quite brute force, as I understand it. It was the same approach as before, but you get this exponential increase in performance when you throw a very large amount of data at it.

ANJA KASPERSEN: It is not dissimilar from the history of warfare, when you throw a lot of brute force into a situation.

CARL MILLER: Exactly. Scale has a quality all unto itself, and that is what we saw here.
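As background to "scale has a quality all unto itself": the empirical scaling-law literature, notably Kaplan et al. (2020), reported that with essentially the same transformer recipe, test loss falls smoothly and predictably as parameters and data grow. A rough sketch of the reported form (the constants are approximate and dataset- and architecture-dependent):

```latex
% Approximate fits from Kaplan et al. (2020), "Scaling Laws for
% Neural Language Models"; constants vary with setup.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D},
\qquad \alpha_N \approx 0.076, \quad \alpha_D \approx 0.095
% L: test loss; N: non-embedding parameters; D: training tokens.
```

Strictly these are power laws rather than exponentials, but the practical upshot Miller describes holds: the recipe stayed the same, and performance kept improving predictably as data and compute were scaled up.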

It is unmistakable, at the moment at least, that new frontiers are being broken by a very small number of companies—OpenAI, Facebook, Microsoft, Anthropic (the maker of Claude), Hugging Face, and a few others—who are able to combine very large amounts of data, computing power, talent, and money. It would be totally unacceptable for important moral decisions regarding AI to be made by those companies. I do not think any of us want that, and I don’t think they want that either. That is why I think these grants exist, as a way to make these decisions in a more equitable, broader-based way.

I do not think, by the way, that is the resolution to the problems created by these companies being so powerful and so rich. I think that creates all kinds of other problems as well. One of the things I do worry about is that, if the same forces hold true, we might see some agglomerated mega-corp emerge, a single organization that has an unassailable advantage in AI. That is the possibility that some natural language processing (NLP) researchers have flagged to me as being one of their main worries.

What I would like to do with this OpenAI grant is connect anyone from the outside whose life is being touched by this technology with the engineers within the companies that build it and try to use a consensus-seeking debate around the rules to govern it to bridge together a whole bunch of different communities that probably feel very mysterious to one another right now. It has always been the case that people building disruptive technologies do not have a great idea of how their technologies are being used around the world and what they are actually doing. There is case after case of that, social media being the obvious one. Likewise there are plenty of people who would benefit from having the internal minds, thinking, processes, and challenges of the companies demystified and made a bit more transparent, to make that boundary a bit more porous.

ANJA KASPERSEN: But if the very institutions that we usually trust to exercise that power are being weakened by it, where do you see that alignment happening?

CARL MILLER: I think the answer to that is both simple and extremely difficult at the same time, and that is to build new institutions. We have to build new and different bars on the cage. I have always felt that leaning into the new places where power is being used and thinking about how we might rebuild the institutional fabric to govern that has always been better than simply trying to automatically strengthen the institutions that we already have.

The reason that I was excited by digital democracy was that parliamentary forms of democratic representation were not doing it for people anymore. In Taiwan the thing that precipitated the rise of the digital democrats was a perfectly parliamentary crisis in representation. The ruling party had the votes on the floor of Parliament to whisk through a trade bill that many in Taiwan feared would draw them closer to China. They had the votes, no one was saying they rigged that election, but so many people felt that they were shut out from that vote—academics, activists, writers, and all kinds of people who had much to fear from being brought closer to China. The same is true of open-source intelligence (OSINT) enthusiasts and citizen journalists.

I do not think our reaction to that should simply be, “Well, how do we strengthen traditional mainstream media organizations?” I think that is important, but we also need to create new rules for OSINT people, create new rules for civic societal journalists, and rebuild the journalistic code in a way which makes sense to them as well, so: institution building. Give me a Royal College for algorithmists. Give me professional codes for technology developers. We need to extend the institutional fabric into these new areas. That is how we are going to get through this.

ANJA KASPERSEN: You have engaged a lot with policymakers, with parliaments, and with governments on these issues. Do you feel that we have the right people in the right places to be able to grapple with these new realities?

CARL MILLER: I think we all need to upgrade our understanding of technology. I think that is incumbent upon all of us.

It is true that politicians live in a very, very hostile environment, and that means that certain professional backgrounds allow politicians to survive. I do not think it is an accident that so many politicians are essentially communications professionals. They are either professional politicians throughout their career or they have come from a place like journalism or think tanks because it is exactly those people who can survive in an environment which I think is quite hostile. That is not the politicians’ fault; all of us create political environments.

Saying that, I think there are loads of great politicians who are perhaps not engineers or machine-learning developers. I do not think that is what you need to be to be a representative, because they have civil services, they have secretariats, and they are supported by technical advisors. The thing that legislators need to do right now is to consider, to act, and to represent.

ANJA KASPERSEN: When policy people come to me for advice I always ask them to home in on the most important skill, which is asking good questions. For some reason with AI—because I think it is easily treated as something quite abstract, something mathematical—people become fearful of it, but this is exactly the moment we should be asking questions.

CARL MILLER: For sure. I have got a podcast coming out on AI and power pretty soon. I am by no means an engineer of these models. I would say to everyone who might feel like they do not have anything to say on this because they are not engineers: The engineers do not know what is going on either. If you actually talk to the people who are training these foundational models, the best way it has been described to me is that it is like medieval medicine or alchemy: They have some experimental results. They can see things going in and then things happen, but they do not have germ theory. They cannot explain or diagnose what is happening in these models. It is like looking into the mind of an alien. That is a foundational model developer talking.

There is not a technical body of skill at the moment that suddenly clarifies everything and allows us all to see things so clearly. All of it is mysterious to everyone, and I think we are all a bit bemused in the middle, and everyone is going to have to bring their own skill sets into this.

I worry about the opposite. On the one hand, what you say is completely right, and we obviously need engineering and technical expertise in the decisions that are being made on a political level. On the other, I would also love a lot more humanities people to be in the decisions being made on the technological level. The real danger to me is a kind of engineering monoculture, full of people who have never seriously tried to study or understand human beings, making decisions which have everything to do with human beings. That to me is just as dangerous as politicians not understanding technology.

ANJA KASPERSEN: I could not agree more. I think there needs to be a lot more investment in that.

One interesting question comes up because you mentioned the technology companies, OpenAI, et cetera. One of the ways we have been trying to grapple with everything we do not understand about these models is “red teaming.” I think you referred to it earlier in your comments as well. Thinking about the results that I have seen from various red-teaming exercises—there was one just now at DEF CON, which you mentioned earlier, trying to grapple with some of the uncertainties coming out of especially OpenAI’s model—do you think red teaming done correctly can help us understand the assumptions that underlie these foundational models?

CARL MILLER: There is a role for red teaming, as you say, at DEF CON. I think unprecedentedly each of the foundational model developers opened up access and let some of the world’s best hackers at their models to see how they could break them, manipulate them, and cause various things to happen in them. That is a part of actual cybersecurity testing, super-essential, but yes, we also need societal red teaming.

To me maybe one of the worries is that because these models are open—the models themselves are not, but access to them is—if you look at the Global South, in places like India, you are seeing all kinds of unbelievable uses of them in the name of access and empowerment, but of course people are worried that all of the bad actors are getting their hands on them as well. I do not think we have a great sense in the policy and research communities of what all the bad uses of these models are yet. You hear the same things, like deep fakes for disinformation, but if I were a bad actor seeking to use generative AI to influence people online in bad ways, I could think of much juicier and I think more powerful uses of AI than just cranking out some deep-fake videos and spamming people with them. That kind of red-teaming exercise is something we badly need to do quite soon.

ANJA KASPERSEN: My understanding is that there are some red teaming exercises on these issues but most of them are done in-house, and what we have seen so far is that there have certainly been those in the tech world and leaders in the tech world that have come back to government officials or to government-led processes and said: “There is nothing to see here. We have it all covered, we can govern ourselves, and policy people would never understand what it is anyway, so let us govern ourselves.”

CARL MILLER: That has always been the problem with trust, safety, and integrity being driven by the platforms, by the companies. At the very best, there is always a mixed incentive there. You do have genuinely amazing people in these tech companies who do great work and truly want to fix these problems or do whatever they can, but that always collides with public affairs, corporate risk, and legal, and when these exercises emerge you never quite know exactly what is motivating them and whether they are actually being deployed primarily to avoid a certain kind of regulation or a certain kind of risk rather than to actually fix the problem.

I have said this for ten years now: Companies are not the right structures to navigate that kind of mixed incentive. We have never expected a company to reconcile those different things internally. It cannot happen. Ultimately they have fiduciary duties, and they are going to be driven by corporate growth and survivability above anything else. It would definitely be unacceptable for this to be slung over the wall as a public affairs tactic. Likewise, as long as the risks are being borne by society and all of us, these cannot simply be things that are done internally in the company. They have to be things that a far wider array of people are involved in.

Again—sticking my neck out—I think the good news is that I would be utterly astonished if politicians allow that kind of argument to fly right now. I recognize that I probably speak to the politicians who are most engaged in these kinds of issues, but I have not spoken to one who thinks this is something which the companies can just be allowed to continue with themselves for a while. That does not seem to me to be the political weather. It has become increasingly apparent to me that governments themselves see the models, the data, and the talent as a new form of the coal and steel industry: It is an absolutely fundamental part of geopolitical advantage. It is something they feel they need to have, and I think countries that do not have foundational models and do not have the capacity to build them are freaking out behind closed doors.

That is the other kind of pressure here on governments aside from just controlling the tech. The United Kingdom for one is absolutely desperate to create an environment which is open to innovation and attractive for technology companies to come here, so they are walking a tightrope. I think the United Kingdom is exactly the kind of country that could be part of the AI revolution but also might not be. That does not feel like it is settled yet. It is certainly not settled that London will be a hub for the most important breakthroughs in AI to happen.

ANJA KASPERSEN: Do you think that appetite to be ahead, to be recognized as a leader, could make us accept risks that otherwise would not have been accepted?

CARL MILLER: For states, yes. One of the clear messages I have received is that the threat of over-regulation is real in the eyes of governments. Many technologists have said to me that China is already not a hub of AI innovation for exactly that reason. All the fire and momentum in Chinese AI left as the state stepped in with a very centralized form of regulation and control, and I think regulators worry about that. I think it is going to be a balancing act.

Probably in the eyes of states it will be seen as risks on both sides, of course risks around AI and related technologies being used in ways which are bad but also the very real risk that you will create an environment which then does not allow you to have in your territory the things that you need to have in order to be safe, sound, and sure about this being capacity that you can use. I cannot imagine, for instance, that we are going to see a serious use of frontier models in defense from companies based entirely somewhere else. That is not what digital resilience looks like to a state.

ANJA KASPERSEN: What do you think we will see in defense?

CARL MILLER: I will make a prediction because I like predictions, but I will caveat that I am by no means a defense technology expert or researcher in any way. I think there is an enormous amount of activity now being put into national-level AI capabilities that will be used exclusively perhaps by statutory and defense sectors.

ANJA KASPERSEN: Before we conclude, you mentioned geopolitics, and I know that your research encompasses all of the world. You are looking at these trends not just in Europe or the United States but also elsewhere. How is power being looked at in the different regional contexts? Do you see regional differences? Also, how is the association between technology—and maybe AI in particular—and power being grappled with?

CARL MILLER: Let me go back to India, which is a place that is being very energetic and innovative in beginning to use large language models. The thing that first strikes you when you delve into that world is the sheer range of uses. Here, sitting in London, you cannot imagine. They are truly liberatory in some cases. You have LLMs being used to translate government services into a whole range of different languages and to allow small farmers to navigate labyrinthine Indian government bureaucracies to obtain certain grants—that is all being done by various chatbots—and also good jobs being created to mark up various kinds of data sets to allow these models to increasingly understand non-English data sets and various kinds of dialects specific to places in India and countries in Africa.

On the other hand, everyone knows that these powers are close to them, these models are open to them, but they are also very far away. They know that they sit in Silicon Valley, and I think there is a concern that a repeat of Web 2.0 will see lots of moral decisions being taken by the companies that are shaped by moral worlds and political environments very distant from their own. That is one concern which has been put to me a number of times. It was very hard for countries outside of America and, I think, Western Europe to get the social media companies to truly pay attention and make important decisions reflecting their concerns, and that meant that when the companies made decisions they were very much molded by U.S. jurisdiction and U.S. moral precepts. It would be very, very sad if this story goes the same way.

ANJA KASPERSEN: We had a guest earlier in this series of podcasts who spoke about “digital colonialism,” particularly looking at Southeast Asia. Also, these language models are becoming quite proprietary and more costly by the day to create, and many countries do not have the capabilities, the skills, or the money to invest in them, so the models themselves may become a tool in this new form of digital colonialism. Do you see this in your work as well?

CARL MILLER: You could call an extension of jurisdiction into other areas of the world a form of colonialism for sure. In general, yes, you do see extractive uses of these technologies, you see people being paid allegedly 80 cents an hour to mark up horrifying child abuse material. Again, that is not a story that begins and ends with AI. Content moderation has been exported in that way for a very long time. You see ultimately the data and the money and the control flowing all in one direction.

That is the moment that we are living in. I have very rarely spoken to someone who does not end up extolling all the liberations and then talking about the many risks, and I have also rarely come across people who, once they have gone through all the risks, are not actually super-excited by a particular use. It is mysterious, scary, and thrilling all in the same moment.

ANJA KASPERSEN: Can we code power responsibly?

CARL MILLER: No. These things represent power; they are power. I think this is a real trap. When you try to code empathy and maybe power as well into these models—they feel so life-like when you talk to them, they feel so human—it is hard not to think there is some glimmering sentience in there somewhere peering back at you, but the best way of understanding them is that they are sophisticated auto-completes. These are models that are much better at sounding intelligent right now than being intelligent. There is no glimmering sentience. There is no understanding of any concepts in there. They are just really, really good at predicting what a human might say. You ask if it is sentient, and it will answer; if you talk about power to it, it will talk very intelligently back to you, but it is not holding these concepts in some kind of computer brain as anything which it is actually trying to understand.
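The "sophisticated auto-complete" point can be made concrete. Roughly speaking, at each step a language model maps the text so far to a probability distribution over possible next tokens and then picks one; nothing in that loop consults a concept or a belief. Below is a minimal sketch of that single step; the toy vocabulary and logit values are made-up illustrations, not any real model's output.

```python
import numpy as np

# Toy single step of "auto-complete": map a context to a distribution over
# a vocabulary, then pick a token. In a real model the logits come from
# billions of learned parameters; here they are invented for illustration.
vocab = ["power", "is", "the", "ability", "to", "shape", "lives", "."]
logits = np.array([0.2, 0.1, 0.3, 2.1, 0.0, 1.5, 0.8, 0.4])

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

probs = softmax(logits)

# Sampling the next token consults nothing but this distribution:
# no concept of "power" is represented or reasoned about here.
rng = np.random.default_rng(seed=0)
print(rng.choice(vocab, p=probs))
```

However fluent the output, the mechanism is this same step repeated, which is why the models are, as Miller puts it, better at sounding intelligent than being intelligent.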

Saying that, of course, it is very worthwhile lastly to reflect on the sheer pace of change. We are getting close to much more general uses. Well, we are already there with ChatGPT and the others: very general models now which do not look like the AI of the past. They are miles away from understanding, but they are also moving many miles per hour at the same moment.

ANJA KASPERSEN: What are the temptations that we should try to avoid given the power these technologies allow us?

CARL MILLER: We should avoid simply using them in ways which we think are consonant with our own personal ideas of good or bad.

This goes back to wild power. Seeing liberations and oppressions around the world, you can see how often things which appear extremely oppressive, restrictive, extractive, and exploitative to others seem fine in the hands of the people doing them. Very few people in this world think they are the bad guys. I think that is always the danger with wild power. In each of our hands we can use it, with very little penalty or control, in ways that we wish, but actually that is quite a bad thing, and the whole point of creating bars is that they are common bars: common laws, common norms, and common understandings.

Right now I think the temptation with AI is just to go ahead and use it in the ways that we think are right, but at least until we have managed to drag new institutions into being, to bootstrap new projects, to create new processes and workflows, and even new norms at some point, I am sure, as well, we should try to widen our own moral world a bit to think about how the uses of all this tech are going to seem, sound, and feel as they touch the lives of other people, obviously in worlds very different from our own.

ANJA KASPERSEN: Have your views changed since the publication of your book? Are democracies currently resilient and strong enough to do this?

CARL MILLER: I am not part of the school that thinks democracy is about to end. I have never felt that. I have always felt that democracy is somewhat more resilient, however alive we have to be to the various kinds of worrying forces chipping away at it.

I do think democracies and states are powerful enough. I have always felt that states are powerful enough. I remember back in the early days of the Web 2.0 debate and the social media debate, members of Parliament would ask me, “What can we possibly do to control companies the size of Facebook?”

I said: “You wield the power of a coercive state. States have armies. They have coercive taxation powers. They control courts. Companies are only companies, and in the great hierarchy of power they are nothing compared to a state.” I think that will be the case now. The state has the power to control these things, and I think ultimately it will.

The good news right now is also that the open economies and open societies have attracted the talent, the capital, the innovation, and the energy to mean that it is in democracies where these models are being built, it is democracies that control them, and by all indications they will remain in leadership positions.

ANJA KASPERSEN: Is there risk in what you are proposing that we are actually then fostering surveillance states as a means to counter the power that these companies hold?

CARL MILLER: There are risks in every response to every problem. The only way through this is by positively rearticulating what we think liberal democratic frameworks are in worlds greatly changed by things like AI and then going out there and bootstrapping and creating them. I definitely do not advocate for one moment that we start surrendering basic principles, including principles of balance of powers, principles of privacy, and principles of private lives to suddenly start confronting these new forms of power. No. We need to remember the things that make liberal democracies what they are and reimagine what they might look like.

ANJA KASPERSEN: Lastly, what instills hope and inspiration in you for the future with all your work on new power dynamics, technology, and its promises and perils?

CARL MILLER: If anything, it comes from looking somewhat to the past, if I am honest. I remember at the end of the podcast—which I have already recorded and is about to be out—there is a moment when I reflect on the moment we are living through. I think this onrush of both thrilling potential and lurking threat probably felt very similar to people living through their own revolutions. I cannot think that this is anything unrecognizable to someone living through the Industrial Revolution, someone living through World War I, or someone living through the Agrarian Revolution, or a ton of other moments when the world has changed in quite convulsive ways, and yet we manage. There are growing pains for sure, often damage, and often tragedies, but we tend to come through these things okay as the human race. We tend to be able to manage them.

We are in the first months of the post-GPT age, and already look at the enormous amount of activity, all the amazing thinkers, thoughts, and passion that is being poured into the debate we have just had. We live in a world very, very different from the early days of social media. There are all kinds of very powerful institutions and very clever people now dealing with this problem, and all that makes me very optimistic. I think with all the things we see happening inevitably we are going to bring these things under some form of control. It will be a constantly moveable feast, we will constantly have to deal with new problems as they emerge, but I think we will get on top of this.

ANJA KASPERSEN: What better way to end this great conversation than on that positive note.

CARL MILLER: Thank you very much, Anja, and thanks, everyone, for listening. We have got through to the end. It has been fascinating. I always enjoy a good chat about power.

ANJA KASPERSEN: Carl, this has been an incredibly intriguing conversation. I extend my heartfelt gratitude for generously sharing your time, profound insights, and expertise.

To our listeners, thank you for tuning in, and a special thanks to the dedicated team at Carnegie Council for their efforts in hosting and producing this podcast. For the latest content on ethics and international affairs I encourage you to connect with us on social media @CarnegieCouncil. I am Anja Kaspersen, and I sincerely hope that we have proven ourselves worthy of your time. Thank you.

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.
