When Science Meets Power, with Geoff Mulgan

Jan 23, 2024 75 min listen

This special episode features Senior Fellow Anja Kaspersen in conversation with University College London's Professor Geoff Mulgan. They reflect on the year 2023, delve into trends shaping technology's impact on society, and discuss the critical interplay between science, governance, and power dynamics.

Mulgan, renowned for his work on technology's societal implications, shares insights from his varied career in policy, academia, and technology. They explore the evolving landscape of AI, its broader societal implications, and the "billionaire problem," all of which underscore the urgent need for informed leadership and innovative institutional design in navigating these transformative times.


ANJA KASPERSEN: Welcome to our special episode with Geoff Mulgan, a renowned expert in social innovation, public policy, and technology, celebrated for his influential work on the implications of technology in society and governance. Geoff’s list of achievements is extensive, but we will touch on a few notable ones, with a full list available in the transcript.

He currently serves as professor of collective intelligence, public policy, and social innovation at University College London (UCL). His past roles include chief executive of Nesta, visiting professor at University College London and The University of Melbourne, and significant positions in UK politics spanning many years. Holding a doctorate in telecommunications, Geoff is also an accomplished author, known for his bold and, in my view, very important writings on a variety of subjects, including social innovation and ways of stimulating social and political imagination.

We are incredibly honored to have Geoff Mulgan with us today to reflect on the year 2023, talk about his latest book, and also to look ahead to 2024, a year likely to be as paradigm shifting as this one.

Welcome, Geoff. It is so nice to have you with us.

GEOFF MULGAN: Thank you very much, Anja. It is great to be with you all today.

ANJA KASPERSEN: Let’s dig into some questions to unpack a little bit your journey and that of your scholarship, Geoff. Through your extensive journey across various fields you have gained profound insights into the role of technology in our society. Could you share with us how this journey unfolded? What pivotal moments or influences, if I may, led you to this deep interest in the intersection of technology, society, and governance?

GEOFF MULGAN: I have had quite a messy career, which does not make a lot of sense. My first job was in city government. I then went to the Massachusetts Institute of Technology to learn about technology and hang out with the people who were then creating the Internet, and I guess ever since then I have been worried that the world of politics and policy does not really understand the world of technology and vice versa. This has led to 30 years of mistakes and miscommunications, which in many ways are only getting worse.

I had quite a long spell in UK government, running the strategy team there and policy for the prime minister, and that was another period when we were attempting to put in place new approaches to what was then called e-government and e-commerce, all these things which were prompted by the Internet, which suggested the need to radically remake our systems, our structures, and our ways of thinking. I actually wrote a book in the mid-1990s about that, trying to think through what the dilemmas would be of an Internet world where we were much more connected and where we would be aware of things like climate change and how our behaviors impact on others. I think I greatly underestimated many of the pathologies of the internet. I was probably too optimistic then.

In other roles I ran for quite a long time Nesta, which was the UK’s National Endowment for Science, Technology, and the Arts, slightly unusual in linking science, tech, and arts and the creative economy, so it was a funder for quite a long period of digital innovations, running investment funds, and also funds for commissioning AI in the public sector.

I have tried to oscillate, as it were, between sitting within bureaucracies—in many ways I am a boring bureaucrat; I like making systems, rules, processes, and laws—and another strand working in technology and at least trying to understand it, and now I am in the Engineering Department at UCL.

As probably a third leg, I have often been an activist. As a teenager I was a political activist, and I have tried to stay connected to the currents of progressive change around the world, which have been in tension with the bureaucracies and with the technologists.

ANJA KASPERSEN: Speaking about your background, you also trained as a Buddhist monk in Sri Lanka during your formative years. Would you say that that experience shaped your curiosity about what it means to be human, which is a theme that is an undercurrent in a lot of your writing?

GEOFF MULGAN: As a teenager I was a classic angry political activist, then I went to Sri Lanka and got to know an amazing Buddhist thinker called Nyanaponika Thera, who taught me a lot. I was very much a failed monk, but in a way his main message was, “You have to change yourself before you change the world, otherwise your pathologies will be realized in the world.”

I remain very interested in Buddhist thinking. I helped set up an interesting organization called Action for Happiness, which has the Dalai Lama as its patron, trying to promote in some ways very Buddhist ideas in everyday life.

One of the themes I have tried to grapple with in my latest book about science is wisdom. My great fear over the last 30 to 40 years is we have had an amazing explosion of technologies for cognition, for reasoning, for algorithms, and so on, but not much if any advance in our collective wisdom. This is obviously not a new insight. It has been talked about for 100 or 150 years, the problem of technology advancing without requisite wisdom, but perhaps it is even more of an issue now as our technologies become so immensely powerful.

ANJA KASPERSEN: Do you draw a distinction between intelligence and wisdom?

GEOFF MULGAN: Many people think of there being a continuum from grasping data, let's say, through to knowledge, through to wisdom, which is not just an accumulation of cleverness, reasoning, or algorithmic intelligence. What is interesting is that all civilizations through the years have thought about wisdom as somewhat different from cleverness and narrow intelligence: it usually involves an ethical dimension, an ability to integrate many, many different ways of thinking and not just a single way of thinking, and very acute attention to context, to the human context, the historical context, and the cultural context.

We are surrounded by wonderful algorithms, but the one thing they are very bad at is all of those different aspects of wisdom. We definitely have superintelligence of certain kinds but very little advance in terms of computational wisdom despite some efforts. Japan in particular had a big stream of work on computational wisdom, but it has not delivered much yet. My hope is that in the next ten or twenty years some of the enormous effort going into large language models and the outer reaches of AI might slightly divert to ask: are these actually enhancing our capacity to be wise, both individually and collectively?

ANJA KASPERSEN: To reframe what you said earlier: or are they simply amplifying the underlying conditioning, as some would call it, in us as humans, what you call our "pathologies," once we embed those pathologies into algorithms?

GEOFF MULGAN: Certainly the business models of the Internet have greatly played on exploiting our pathologies, our biases, our predilections, and our compulsions, and this was something very underrated. As I mentioned earlier, I was involved with many of the people creating the Internet, the World Wide Web, and so on. Almost none of them had any sense of how platform technologies would be so directed in that way, to try to encourage often very compulsive behaviors, let alone to amplify misinformation, folly of all kinds, distortions, and lies.

We have created an extraordinary set of machineries which—no one intended them to do this—have often more amplified collective stupidity than collective wisdom. I think now is definitely a moment to take stock of that and also to see what was wrong in our mental models which led so much of the world—the computer scientists, Silicon Valley, and the investors—to get this wrong and not to understand the technologies they were bringing into the world.

ANJA KASPERSEN: Which is a great segue to delve into your latest book, which has the appropriate title, When Science Meets Power, which aptly captures, in my view, the essence of current global dynamics where new scientific capabilities are indeed transforming not just who holds power but the very nature of power itself. We actually did a separate podcast on power a few months ago, digging into this topic.

Geoff, if I may ask you, you touch on a few issues that I know are reflected in your book as well throughout this conversation, but could you guide us to the core premise of your book and also share some key excerpts from it that you think our listeners might find enlightening and help them think through some of these big issues that we are faced with?

GEOFF MULGAN: First, the book is an attempt to describe or diagnose what I see as the problematic relationship between science and power right now. Almost every week you see a new example of this. There are the obvious examples, which include the anti-science drives of people like Ron DeSantis in the United States, but there are equivalents all over the world, who reject the very premise of scientific discovery and scientific knowledge. That is not going to go away and could get much worse in the next few years, but in some ways that is the relatively easy case. I think the harder cases are where well-intentioned scientists and well-intentioned politicians clash or fail to understand each other.

Here in the United Kingdom in the last few months we have had a fascinating inquiry into COVID-19 with all the political leaders and the scientists having to give evidence, and both sides come out pretty badly. Politicians like Boris Johnson simply did not understand a lot of what the science was saying, and they took refuge under the slogan, “We will only follow the science,” yet of course the science did not really tell you what to do. It might tell you that there was an epidemic and that infection rates might become exponential and that you had to therefore act decisively to lock down or shut your borders, but beyond that science could not tell politicians how to weigh up the interests of the young against the old, health against the economy, and what might be acceptable infringements on civil liberties. This hiding behind the language of following the science rather quickly fell apart.

The scientists were very interesting in their evidence. They are very smart people, but often they realized they were almost becoming executives. They were making decisions, but they did not have the training or preparation for making these very complex decisions. It was not clear what their accountability was. In theory they were just advisors; the politicians were making all the decisions. Also, often the scientists had unbalanced disciplinary backgrounds. Often they knew a lot about biomedical science but very little about mental health issues, which very quickly came to the fore.

In the United States the congressional interrogations of Anthony Fauci are very interesting in the same way. Some of the interrogators are very much anti-science people trying to undermine a very senior scientist, but his own responses about the testing in Wuhan, about the U.S. National Institutes of Health's role in funding what many people see as incredibly dangerous research on pathogens, gain-of-function research, and so on, to many people look not all that wise. I start the book with the fact that in many big cities around the world there are biosafety level 4 (BSL-4) labs, labs doing very, very dangerous research on bio-organisms.

ANJA KASPERSEN: Just to inform our listeners, there are four biosafety levels. Maybe you want to say something about that.

GEOFF MULGAN: This is the most dangerous level, for which you need very strict rules and regulations. They appear in Hollywood movies with people dressed in hazard suits of all kinds, but these labs are often located in big cities, and it is possible, although it now looks as though there probably was not a leak from the Wuhan lab, which is one of these facilities, that such a leak could have started the pandemic.

So here was science often doing very intelligent and very important work but not with a healthy relationship to the public or with democracy and not very good at explaining what it is doing and why it is doing it. The other issue, of course, in 2023 was artificial intelligence, which is a product of extraordinary science over the last 50 or 60 years.

The politicians of the world struggle to make sense of what their role should be. Should they be banning things, regulating it, investing huge amounts of public money to get competitive advantage, or making their military ever more automated? All of these are choices, but none of the politicians were able to deliberate on them very intelligently.

This leads to the core argument that I make in the book, that we collectively—as humans, as a society—do need to shape science and technology. It is an enormous force for good and an enormous force for harm. That has been obvious ever since the nuclear bombs of the 1940s and is even more true of AI, quantum, new genetics, or synthetic biology, a whole array of fields where science is both immensely potentially powerful for good and immensely dangerous.

Our only collective means of steering and guiding science is ultimately through politics and governments, which pass laws, set rules, allow some things and ban other things, fund some things and defund others, but our politics is more and more incapable of playing this role. I call this the “science-politics paradox,” that only politics can guide science but politics in its current forms is woefully ill-suited to guiding science.

I argue that science on its own is not very well placed to self-govern, although many scientists in the past believed in a dream of self-government of science, which in many ways made a lot of sense about a hundred years ago but makes ever less sense as science becomes so interwoven into daily life and so powerful in its effects on our air, our health, and even our brains. This is why we need a rethink of almost all the places in which power and science intersect, from advice to parliaments to laws and regulations on the national level to the global, and in my book I try to sketch out what some of those changes might look like.

ANJA KASPERSEN: You penned a thought-provoking article entitled "Can democracies afford incompetent leaders? The case for training politicians," which I assume builds on some of the core tenets of your book that you just spoke about. The article addresses what you call the "gaps between leaders' capabilities and the need of our times." The two cases you mention, AI and how we deal with pandemics and build stronger resilience in societies, are definitely two core parts of that.

In your view, do we have leaders capable of navigating in these challenging times? You alluded already that we may not have that, but what is the article about, and how does it build on what you just spoke about from your book?

GEOFF MULGAN: I think we often have pretty able leaders. I think they get a bit of bad press unfairly. It is easy to knock them. This particular piece was mainly prompted by the fact that I have taught in many government colleges around the world, from China and Singapore to the United States, Canada, Australia, and Europe. The weird thing about most of those is that they do try to train civil servants to better understand things like technology, science, and what is happening to international law and all this sort of stuff, but the politicians who are ultimately making the decisions usually get no training at all. It is about the only serious profession which is treated as completely amateur. You can become a president or a prime minister with literally not an hour of training or anyone even asking you whether you are qualified to do the job.

That might have been okay at some point in our history, when the decisions politicians had to make could be made with intuition, experience, and so on, but now they are more and more having to deal with issues which are highly complex and highly scientific. In the last three or four years it has been pandemics, AI, and what to do about a mental health crisis or climate change adaptation. These are the everyday issues which do require at least a baseline of skills and knowledge.

In China ministers, governors, and others do get fairly serious training at the Party schools and other academies, and they have to do residential training every year. The United States has some infrastructure for training politicians in places like the Kennedy School, and Bloomberg runs a program to train mayors, which started about five years ago. Australia actually now has a new academy for politicians, a very good one, called the McKinnon Institute. But in many countries there is essentially nothing. That means that even if you have a pretty able politician who wants to do the right thing, they will founder if they’re faced with a highly complex challenge.

I think we have a bit of the same problem with bureaucrats, although as I say there are many colleges and master of public administration programs in universities and so on. Most civil servants tend to have a background in law and economics. Not so many actually have backgrounds in science, data, computer science, or the things which are increasingly important to the decisions they are making. Again, that is bound to mean a problem with capability for government as a whole.

One of the things I argue in the book as well is that we need to seriously attend to the skills of our decision makers. There is a different kind of curriculum they need compared to a generation or two ago. We also actually need a bit of a curriculum for the public, who have to at least make decisions about decisions even if they are not directly making laws and so on.

When I was at university I did a course called philosophy, politics, and economics (PPE) at Oxford University, which was set up a hundred years ago. That was thought to be the forward-looking way to train people to run governments and to help them prepare to do macroeconomics and stuff like that, and it was fairly progressive at the time.

It is still the course which many leaders in Britain have done, but it is completely inadequate for the actual things coming across their desks, which, as I say, are far more likely to involve a lot of complex science, data, statistics, and an assessment of complex and often ambiguous evidence. I think every country needs an academy for politicians at a minimum and an overhaul of the training for public servants in national government, city government, and elsewhere, whose curricula are often very out of date compared to the tasks they are having to do.

ANJA KASPERSEN: It is interesting because this also tells a story about outlook, about where you are in the world. I also did a course as a student called PPE, but the E was not economics, it was ethics. I think that is often the missing component, and also that we are increasingly, I would say, making ethics into something it is not. We talk about "ethical AI" or "ethical technology"—don't even get me started on the use of the word "responsible"—but something labeled ethical AI, or ethical anything, can still be very harmful. I think that is the tension point that we need to grapple with.

For Carnegie Council, the work that we do—and have also been doing together, which we will get to—has very much been about trying to promote a new line of ethics, a new theory of ethics, one that pushes the focus onto how we deal with those tension points and how we grapple with the tradeoffs that inevitably follow any decision to use and develop these technologies that hold so much power and can impact so many people at scale. Otherwise words like "ethics" often just become a sort of nice way of talking about economics, economies of scale, right?

Do you see this a lot in your work, “ethics” being used in rather unscrupulous and rather cynical ways?

GEOFF MULGAN: “Ethics” risks being a spray-on aerosol. Early in 2024 I will be publishing a review that I have done, which is quite a boring review, on what we have just been talking about. I was asked by a couple of institutions in the United States to look at current training for both civil servants and politicians around the world: What was in the curriculum, and was it fit for purpose? I will be publishing this survey on universities and civil service colleges and so on.

I also looked at what governments said they wanted, the competencies they thought they most needed, and ethics comes very high on that list. They claim they want civil servants who are able to reason ethically, to think about issues of integrity, and I hope what they mean by that is not just an ability to spray it on but to think about the boundary cases. Ethics always comes alive when you think about the difficult cases, the ones where it is not obvious what the right thing to do is, and you learn a way of reasoning with others which is ethical and accountable. Yet this is largely missing from the current training provision for both politicians and civil servants.

I have a little bit of an issue—which I talk about in the book—about the language of ethics. One can take different views on this, but I go back to Aristotle who, more than 2,000 years ago, did actually suggest a difference between politics and ethics. For him at least ethics was mainly about the individual—what is the good life, the right decisions for the individual—and politics is the collective version of that—how does the society decide what is good and right for itself as a community, as a polis?

I think many of the issues which are labeled as ethical are actually political in that sense. They are really about what we collectively decide to do about risks, about nuclear power, or about the costs and benefits and tradeoffs of carbon adaptation. I would prefer the various bodies with ethics in their title perhaps to add in “politics” as well to recognize that these are ultimately small-p and big-P political judgments which we all have to make, but that may be a slightly pedantic argument.

ANJA KASPERSEN: It is an interesting one because Aristotle, of course, was known as one of the early thinkers around what later became "virtue ethics"—to what extent does your character speak to the ethical consideration that is being done? One could make that argument both in your book and in the article, where you are asking, "Do we have competent enough leaders to navigate these very perilous times?" Is that an issue of character, or is it an issue of instituting an ethics that is much bigger than character, one that is the result of a sustained, systemic effort as opposed to being left to the individual, which, once it is left at the individual level, can be exploited? And it is being exploited.

GEOFF MULGAN: You are absolutely right there. In a way we sometimes have to rely on delegating to leaders who we hope have sufficiently virtuous characters, who do the right thing under pressure, and who make good decisions and don't just fake it, although there are lots of incentives to fake it. That is in a sense what we look for.

The other thing, though, which I think is a great dilemma of the years ahead, is whether ethics can be almost turned into something like an algorithm. Many people working in AI think of ethics as just a set of rules or principles which you could in theory put into a computer that will then spit out the ethical answer, whereas certainly all of my experience is that ethics is fuzzier than that, more ambiguous, and, as I said before, more contextual. What we really want are leaders who can explain how they reached a judgment combining multiple factors—some of which may come from data and science, some from ethical reasoning, and some from politics—and then be held to account for that decision. I think that public discourse about ethics is important.

In the book I talk about what I think is a good but unusual example of that, which may be of interest. More than 30 years ago, when human fertilization became a big issue—cloning, in vitro fertilization, and so on—in the United Kingdom, a huge public debate was held about what the right way forward should be: What should be allowed? What should be licensed? What experiments should be permitted?

Parliament had a bigger debate on that than anything it has had since in relation to science, and it led to the creation of what was called the Human Fertilisation and Embryology Authority (HFEA), part of whose duty was to decide—it was a very powerful regulator—what was lawful and what could be experimented with. But it was also set up with a remit to explain, to communicate its thoughts and its reasoning, in real time to the public, and it has basically worked very well in that it has allowed constant innovation and scientific experimentation while keeping public confidence and public legitimacy.

For me what is fascinating is why we did not have anything like that for the Internet. Why have we not had anything like that for AI? All these other fields of science have completely lacked an institution that is simultaneously pro-science and pro-innovation but also very deeply ethical and very deeply accountable to the public for the often quite subtle judgments it makes about what should be allowed. Other countries have had much more rigid answers to those questions, like banning stem cell research and so on. The HFEA is an unusual example of relative success in a very controversial space.

ANJA KASPERSEN: We actually had a podcast some time ago on what I think you are really pointing to, which is this desperate need not just to bring back but to refine what we call “public space ethics.” We had a podcast with an Athenian, a scholar in this field, some months back, and we talked about this notion of what does civil ethics or civic dialogue ethics mean? What does it mean to have an ethical public space where you discuss these issues? You mentioned Aristotle earlier, and these were definitely core to Athenian society. The Athenian society had many things that were less desirable to replicate in modern times, especially in terms of equality, etc., but there was definitely a strong focus on having that public discourse.

But the public discourse is increasingly—and I know this is a field of your expertise—being co-opted and sometimes even swallowed whole by dominant narratives that have nothing to do with the truths or the facts of the situation. So who gets to define what is important, and who gets to decide who makes decisions in this space, particularly in relation to what you mentioned before, how we engage with edge cases as technology is being deployed?

GEOFF MULGAN: The power to frame the argument is in some ways the most powerful power. Everything follows from that.

I think there is also a challenge for the world of science itself, and I hope with this book I am provoking at least a bit of that discussion. As I said a bit before, the scientists often think they should not be accountable to the public and that the realm of science should be autonomous. There were all sorts of very good reasons in the past why you wanted science to be autonomous from politics, autonomous from the public, self-referential, able to explore, speculate, and go to difficult places. This was the foundation of the science community. For example, the Royal Society in London, the first scientific institution, set up in the mid-17th century, was founded on that principle of autonomy.

Increasingly, though, I think that pure autonomy does not work, partly for the reasons you have said. There has to be a dialogue with the public about the ethics, about the choices, and about the broader issues, which attend to not just the scientific logic but also the ethics and what the public thinks is reasonable.

I think there is a deeper political thing going on as well. We have seen in the last five or ten years an incredible collapse of confidence in science among some groups. In the United States Republican voters’ faith in science I think has halved roughly in the last five or six years. Here in the United Kingdom there is very worrying polling evidence which shows that a majority of people do not think research and development benefits them at all, which is quite surprising. Most people have no knowledge of what science happens in the area where they live.

In some ways that is not surprising because there is no communication with people about what science has done in their area and what the choices are. That goes back to a fundamental strategic question of the next ten or twenty years, which is: What priorities should govern where the brainpower of science goes?

In the past there were basically three dominant answers to that. One was the state, which got scientists to make weapons, missiles, tanks, and so on. There was another view that it should be business, that you should direct science to drive gross domestic product growth. Then there were the scientists themselves, who thought they should make decisions through peer review and autonomy.

But the idea of public values and public preferences influencing spending on science has always been quite weak, and the same is true if you look around the world. I was part of a project last year with the United Nations where we looked at global spending on science and technology and whether it aligned with the Sustainable Development Goals (SDGs). The SDGs are what the world has said are its biggest priorities, and perhaps not surprisingly we found an enormous mismatch. Where the money goes, where the brainpower goes, does not align very well with what the public thinks are the great priorities of the next ten or twenty years.

One of the things I try to look at in the book is what institutions might look like that would help us nudge, or at least slightly better align, this enormous capacity we now have for invention, for science, for exploration, and so on with the issues of food, malnutrition, mental health, decarbonization, lack of water, all the things the world desperately needs faster innovation on. At the moment, if you look at where the money goes, in the case of digital tech a huge amount is spent on click-through advertising—going back to what you were saying earlier, on ways of manipulating people's behavior—but very, very little on how to stop disinformation, how to protect democracy, and how to reinforce childhood and protect it from manipulation.

With AI I think the same is happening now. There is enormous spending on things like YouTube recommendation engines or what have you but very little spending even now on what AI could be doing to improve, for example, jobs (there is a project I am working on at the moment on how people get a new job in an age of AI), or education, welfare, and these other fields.

One final example of that: Much of the biggest investment next year in AI will be into essentially personal assistants, copilots, personal intelligences (PIs) of different kinds, which will try to get to know you, your preferences, your behaviors, and then shape guidance or advice to you about what you should do.

To my mind the biggest potential of these lies in care and welfare systems, for people, usually older people, perhaps with multiple conditions and complex lives, who need a lot of help and care, but almost no investment is going into adapting these AI technologies to where the greatest need is. They are being mainly tailored to wealthy executives in the Global North and the West or to decisions linked to consumption and spending, not to fields like care.

I think we have a massive imbalance between essentially the world’s needs and the capabilities of science and technology, which have not aligned with those needs.

ANJA KASPERSEN: You have held, as you said yourself, quite a few significant roles in politics, advising prime ministers and directing work on policy and on innovation, and much of what you are referring to right now requires exactly that. You are right in saying that we are not making investments in what I would call the "cognitive resilience" of people to engage with tools and technologies that would fundamentally alter how they define their own role in that relationship. This is not just a conundrum. It is an ethical issue that we need to grapple with soon.

You also mention another interesting point, and I know you have done some work on this and I guess you had to also come to terms with what it means when we are embedding technologies into our children’s lives the way we are. You spoke earlier about this new kind of “addiction economy” that we are maybe not intentionally but unintentionally creating. This particularly revolves around children’s engagement with technology.

We tend to try to regulate technology at the use-case level, and we have all seen the discussions around the European AI Act, etc., which is very much focused on use cases. But we all know that unless you actually grapple with the very design of these technologies, holding companies and leaders accountable for the intentions set out when these tools are devised, developed, and launched, you are not going to be able to grapple with some of these undesirable ethical outcomes.

This is particularly relevant for those under 18, who are not only being subjected to it but are likely to be held accountable to it because their digital footprint is being mapped in ways never seen before in history. So we are creating a generation that has not only become digital savants in some ways but is also subject to the algorithm in ways that previous generations never were, without the regulatory frameworks in place.

Thinking about that design phase, as a professor of innovation as well and social innovation, what are your thoughts on this? Again, back to your political roles, what do you see as the underlying trends of why are we not getting this right?

GEOFF MULGAN: I agree with all that you have said there. I have just finished teaching the first term of a new degree which I helped create, which in a way is trying to be an answer to that question. I wish we could get our civil servants and politicians sitting through these undergraduate classes, because the degree is part of engineering, but it is trying to get the students to understand that technologies do not just come into the world with the world passively adopting them. There always is an interaction, a shaping, and an argument. We look at waste, water, and all sorts of things, but the car in some ways is the simplest example.

One hundred and fifty years ago cars started appearing on the streets of cities in Germany and elsewhere, and the story then of how the world in a sense came to cope with the car to me is a fascinating one and quite different from what people expected at the beginning. We learned that you had to have speed limits and road markings. You had to have driving tests of all kinds. You had to bring in new rules on drink driving. You then brought in new rules and standards for cars and emissions and catalytic converters.

Then there was a shift, often driven by concerns for childhood, with speed bumps to radically slow cars down in some neighborhoods. Most recently there was a whole push to stop cars idling outside schools because that again is bad for the kids in terms of air quality. A hundred and one different rules and norms have come into being to help us cope with the technology of the car, some very formal—some of them are laws and regulations—but some are just norms.

I think with the Internet, AI, and so on, what we will end up with is equally complex and multiple—lots of rules, lots of laws, and lots of norms—to try to get the best out of the technology and avoid the worst. What is extraordinary is how little that debate happened in the two or three decades when the Internet became so central to everybody's lives, and on AI the AI scientists have, I think, done an amazing but very unhealthy job of blocking exactly that kind of necessary debate about the details of how we cope, how we shape, and how we adapt.

Yet that is exactly where we have to go in the next ten or twenty years, and it will end up, as with the car, with lots of different institutions doing the regulating and lots of different logics in there, because the car is so central to our society. It is not a single notion of safety, for example, that has determined what is done with the car, any more than it will be with AI.

I am still struck by how many highly educated people essentially think of science and technology in an almost linear way, as things which just come into society and which we then use, rather than realizing there is always a to and fro, an argument, a jostling between the push of the technology and the pull of the public, and then usually political and social processes of shaping, which hopefully help get us to better outcomes in the end.

ANJA KASPERSEN: I am sure that you like me find yourself in many conversations that fundamentally end up with two questions: To what extent is technology shaping society, and to what extent is society shaping technology? There is always an element of both, but I think increasingly we are seeing that people are accepting the narrative that technology will be shaping our society.

I find myself curious about why that particular narrative sits with people, because I think what we have learned over history is that it is our societal needs that get the funding (you mentioned before that the power of government is to prioritize, to fund something or to defund something), and yet it is the tech-deterministic narrative that increasingly gets funded. These tech-deterministic narratives have really gained a strong foothold and are in some ways disempowering people or disincentivizing that very same public discourse you were alluding to earlier.

GEOFF MULGAN: I think historically we were unlucky that these immensely powerful digital technologies emerged into the world at the same time as an ideological swing against public action and against government, captured by Ronald Reagan's famous saying: "The most frightening words in the English language are, 'I'm from the government. I'm here to help.'" There is a whole belief in many countries that government would actually get it wrong, would damage the innovation and the technology, and therefore they have just said it is better to be laissez-faire: let it be, let the tech people and the market decide. That was very influential in the 1990s, the 2000s, and even the 2010s in many countries, and I think it is probably the explanation of why there was such a failure to create the necessary institutions and rules.

Some parts of the world are exceptions to that and were less affected by those ideological shifts. China obviously has moved very, very fast on AI regulation with its cyberspace authority and many other things, perhaps too far for many people. India in some ways has gone further in what I think is very creative public innovation around digital public infrastructures of all kinds, in perhaps a more confident sense that the state has a capacity to shape and use technologies in ways which would not emerge from a purely market logic. But certainly in North America and much of Europe we have had this unhappy coincidence of an anti-government, anti-public mood for a generation or so at exactly the time we perhaps most needed public institutions to help guide and shape these incredibly powerful technologies.

ANJA KASPERSEN: Where do you see the EU AI Act, the latest breakthrough in actually agreeing on something, though it is still unfinished and yet to materialize (as with anything that has to do with European processes, only when you have the final draft in its final form can you actually trust what is in it)? Where does it place itself in the landscape that you just referred to?

GEOFF MULGAN: It is good that it is happening. It has taken quite a few years. I think it probably should have happened ten years ago, but better late than never.

I think the world is now moving very fast. Joe Biden issued an executive order not long ago which was to my mind quite rich and complex. It talked about many different fronts, so it did not fall into the trap of my prime minister, Rishi Sunak, who has talked about AI safety as being the only issue which matters.

The EU Act I think will have a lot of challenges in its implementation. It has already had the huge challenge of large language models that were emerging in the midst of its deliberations and calling into question some of its principles.

In a way what I hope we will see, and this is perhaps a tricky thing for governance of any kind in this space, is when technologies are moving very, very fast you need a different model of regulation than when they are moving quite slowly. The classic model, which became the dominant theory in the 1990s and beyond, was that you set up regulators who should give regulatory certainty to companies, a few simple principles, not to intervene very much, and maybe revise them every 20 years or so.

In the case of AI but also of many other technologies—like driverless cars, drones, and some genomics; there is so much uncertainty about what will be on the market even in two years let alone five years—I have for quite a while argued that we need a different approach, which I called “anticipatory regulation,” where you create quite powerful regulators who can give temporary, contingent licenses, and can say, “You can do this but under these conditions, and we will track the data of what actually happens and then review it in a year’s time,” or using experimental methods to work with the companies or with the innovators to test ideas out in reality or in test beds or in simulations.

It is a very different ethos of regulation from the classic lawyer-driven models of the late 20th century, but I think in many fields that is where we have to go. I think the EU Act will have to end up with quite empowered regulators who are not following over-specified laws about what they can do but are rather told the outcomes they have to achieve and given a fair amount of latitude to determine what rules in what context will best deliver those outcomes, because the legislators will never be able to predict the actual conditions and the actual technologies they will be dealing with. I think it will take us a few more years to get there.

ANJA KASPERSEN: Going a bit more global, we did some work together earlier this year, when we launched "A Framework for the International Governance of AI." To our listeners, all of this can be found on our website in both audio and written versions should you want to learn more and also share your thoughts with us. You referred to some of this, like the modalities that are required to get that international governance model to work.

Can you elaborate a little bit more on what you have seen of what types of governance models work, what we can eliminate, what we should be looking out for, and what can help us move this ship forward? As you said, there are many, many different layers. Now working for an organization that is also quite instrumental in setting standards, I often end up in conversations where the panacea to any conversation that does not have a clear outcome is to push it downward and say, "Let the standards people deal with it," when it comes to implementation. How do we get that equation right, get the policy right, get the regulatory frameworks right, and then make sure that we have standards that are appropriate to address all those issues, including all those edge issues that you spoke about earlier?

GEOFF MULGAN: It is very easy to be pessimistic in 2023 and 2024 about a world where we have war in Ukraine, intensifying tension between China and the United States, and an ever more multipolar world that struggles to get its act together on common problems and appears to be becoming more nationalistic, not less, but I think there are two positives to build on. I worked years ago on telecom standards, and in many ways standards are the extraordinary success story of the world over the last 50 years, not just the standards for mobile phones and the Internet but also things like bar codes and many other things.

ANJA KASPERSEN: Wi-fi.

GEOFF MULGAN: Exactly. Our lives benefit so much from the boring work done behind the scenes on technical standards.

The other positive, at the global level, is the spread of bodies which are not about deploying troops or even about deploying money; they are essentially organizations to orchestrate knowledge. The Intergovernmental Panel on Climate Change (IPCC) does it in relation to climate change, giving the world a picture of what might go wrong. The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) does it for biodiversity. We have a whole host of examples, which I list in some detail in my book, of these organizations essentially trying to orchestrate the commons of global knowledge about issues.

In relation to AI one of the things I was arguing is that we may at some point need to get to true global governance, some capacity to set standards, to regulate, and perhaps to penalize countries or companies which do something crazy, but a precondition for anything else is at least some shared knowledge about what is happening, what is going wrong, and what might happen in the next two to four years.

We looked at the analogy of the IPCC, though AI is quite different. At a minimum, a step for 2024, I think, is to create something which brings together the scientists, engineers, and others to document the key developments; to have a registry of the most important algorithms with the biggest impact; a registry of critical incidents, dangerous incidents, in the way the airlines do—the logging and sharing of problems and near misses is what has helped air travel become so much safer; some scenario work, again of the kind the IPCC does, about what might happen in two, four, or six years; and then some more detailed attention to specific issues like jobs or education.

It seems to me that this should not be too hard for the world to do. We have the knowledge, we have amazing science, but we are just missing some of these crucial institutions to bring it together and make it usable.

A final point about one of the tasks I think it could be doing: There are 200 or so governments in the world. Only a handful of them have any capacity in AI, any deep capacity to understand what is going on. They need someone to provide the draft laws, the draft regulations, and the things they can adapt for themselves. The Organisation for Economic Co-operation and Development has done a little bit of that, but essentially it is an empty space, even though AI is shaping our lives every minute of the day right now. This is not a future problem. It is a present problem.

Perhaps the G20 next year with Brazil in the lead might attend to this. Perhaps we will get some momentum. Perhaps we will get the funding. It does not need very much money, but funding is necessary to create a new global entity as a minimal condition for getting a grip on an immensely powerful technology.

ANJA KASPERSEN: As someone who has made a career out of social innovation and public-sector innovation, do you think there is an element of us conflating innovation with what is happening in the AI field, which is actually proving to be unproductive for putting in place the level of governance that you just spoke about?

GEOFF MULGAN: There is a common phrase used in Europe called the "STI trap," which is to think of innovation only in terms of hardware, algorithms, or software. With almost anything serious in the world that you want to improve there will be a combination, yes, of technologies, staff, and mobile phones, but also of innovation in business models, innovation in how society organizes itself, and innovation in how we live our lives in our homes. This focus on hardware as the only innovation that matters often leads to a very distorted view of change.

This is very evident in relation to decarbonization, which is the great challenge of the world right now, and I do a lot of work with governments on that. They have long set up investment funds for new battery technologies or solar or electric cars, and so on, which is great, but it has become ever more clear that that is only part of the story. For example, I published just a few days ago work done in the United Kingdom with some of the governments here on home energy change, how you get much bigger takeup of heat pumps and serious retrofitting of homes.

The technology is sort of there and the economics is sort of there. What is missing is aligning that with psychology and how humans actually work, and nearly all the crucial innovations we need are not actually engineering innovations but innovations in how we help people change their lives and deal with the hassle of changing your house to a new energy source, switching your radiators, getting rid of your boiler, or emptying out your loft so you can insulate the roof.

These are very human issues, but they turn out to be the big barriers to progress every bit as much as the purely technology and engineering issues, and I think that applies across the board. We need a both/and way of thinking about these things, technology plus humans, otherwise we waste a lot of money and don’t achieve the outcomes we want.

ANJA KASPERSEN: I want to circle back to one of your articles, "The billionaire problem." I am hoping you will answer that question.

Before I go into the article: some of the things you talked about, decarbonization and how we get AI governance right, are issues of investment, because we know we have some of the decarbonization technologies available to us, but they need serious investment to be scaled to a level where we can actually do what is required at this point in time, and instead we are looking at maybe less desirable solutions in that space. The same is true with AI. We pretty much know what needs to be done, but it becomes an issue of incentives and where the power lies.

In the article, "The billionaire problem," you share some rather mind-boggling statistics. You speak about the staggering global wealth gap, evidenced by statistics like—I am quoting your article—"the richest 1 percent taking about half of all new wealth," and this is not just wealth generated over the last three years, when COVID-19 forced us into more digital patterns; you write about this over the last decade. In the two years to December 2021 it was two-thirds. This has left the world's billionaires worth some $13 trillion, an extraordinary concentration of wealth.

Of course I am mentioning this too because this podcast spins out of the AI & Equality Initiative based at Carnegie Council, where issues such as how technology exacerbates power and wealth inequalities are very important. But I would love you to talk about and answer the questions: Do we have a billionaire problem, and to what extent does this privatization of wealth influence our ability to come up with those global collective solutions, in your view?

GEOFF MULGAN: This is where I think a little grounding in mathematics is quite helpful, to look at the numbers as you are doing. I think we definitely do have a billionaire problem, or rather a series of billionaire problems. There is this extraordinary further concentration of wealth in the hands of a very few people, mainly men, but also the concentration of billionaire power over the media, with a billionaire owning Twitter/X or many of the dominant media in TV, Mark Zuckerberg and so on; we have in a sense handed our public communication fields to a small number of incredibly wealthy men. The result is a real problem with the allocation of money, because it has happened at the same time that governments are short of cash after COVID-19 and the long financial crisis.

If you want one example of the absurdity of it, it was the fund announced in the first week of the Conference of the Parties 28 (COP28) in Dubai, which was good. It was a new fund to pay for some of the damage done to countries suffering from climate change which had not contributed to it. The sum in that fund was, if I remember right, $550 million, which sounds like a big number until you realize that is about one footballer, a fraction of the wealth of a single billionaire, because many of them are worth tens or even hundreds of billions.

I think much more broadly we are not getting the money to where it is needed. It is being hoarded by the billionaires, who become addicted to money they do not need. Two hundred and fifty billionaires signed up to a giving pledge—and to their credit Bill Gates and Warren Buffett said, commit to giving away half of your wealth, either in your lifetime or in your will—but there is literally no data at all assessing whether any of those who have signed up have actually done so.

I think we have a large group of people sitting on wealth, hoarding it, and it is not going to where the priority needs are, whether for food, education, health, or climate action. Because the billionaires often fund politics and are often key funders not just of right-wing parties but increasingly of center-left parties as well, politicians often do not dare to raise the issue of serious taxation of billionaires.

That is just beginning to change, with some action in the European Union. The global minimum corporate tax was brought in a couple of years ago, with the U.S. administration playing a crucial role in the 15 percent global tax, and there may be debates at next year's G20 on some equivalent for billionaire wealth. I hope that happens. Some of the best economists, not just Thomas Piketty and Joe Stiglitz but also Gabriel Zucman, are working on this, using their brains to think about how we deal with this imbalance.

We have allowed a horrible series of trends to essentially divert money away from its most important uses into buying ten to twenty homes or ten to twenty yachts for billionaires, who do not need them. They do not even get much pleasure out of their accumulation of wealth. I have talked to quite a few, and the prompt for that article was a conversation with two billionaires I cannot name, both worth tens of billions, and we agreed they could probably give away 99 percent of their wealth with no effect on their standard of living.

ANJA KASPERSEN: If we stay with the math for just a little bit, the AI economy has been valued—by, let's face it, big professional services companies that might also benefit from that economy themselves—at about $150 trillion by 2025. That is the AI-generated economy in itself. Compare that to the world's billionaires' $13 trillion share of current global wealth, and there is a massive gap here.

Of course that wealth is in large part invested in the same digital infrastructure and the same digital platforms that would generate that new AI wealth, equivalent to about $150 trillion, by 2025. When you talk about science meeting power, there is a lot of power in that $150 trillion price point as well. How do you see that impacting future trends and the trajectory into 2024?

GEOFF MULGAN: Again, I think we have to use politics to come to better deals. This is what happened at the end of the 19th century, when huge wealth accumulated in the hands of people like Rockefeller, J. P. Morgan, and others, and there was then a swing toward believing their companies had to be broken up, the anti-trust movement, that taxes had to be higher. By the 1950s in the United States there was a 90 percent marginal tax for the rich, and a whole series of shifts were needed to deal with what was then thought to be a very malign concentration of power. I hope a similar shift will happen in the next ten or twenty years.

In a way the deal that happened in the 20th century was not so dissimilar. It said: “Let’s not actually slow down the technology. Implementation of advanced technologies raises productivity and can improve the standard of living of the whole of society, but we need to ensure the losers get compensation and the winners don’t get ridiculous gains.” We will need exactly the same kind of deal as AI transforms labor markets.

Just one concrete example of that: I am working at the moment with a group of about ten governments, mainly coordinated by Bangladesh, looking at how to use data and other tools to understand the potential dynamics of change in jobs markets for people working in textiles, furniture, leather, and so on. How can you provide new tools to help people navigate toward the skills that will be needed in ten years’ time and the new jobs that may be created, so that their lives are not ruined by AI?

It is a kind of obvious project to be doing, and as I say there are a number of countries, mainly in Africa, collaborating with us, but we have not yet got a single billionaire willing to give even a tiny amount of money to help with that kind of essentially social deal, which says, “Yes, let’s all get the benefits of the tech, and you can still be pretty rich.”

There needs to be some reorientation of that wealth to help the people whose lives will otherwise be ruined, so that they can navigate toward a better future in that world. The dystopian prospect for 2030 is that there will be even more concentration of wealth in the hands of these billionaires and the immiseration of tens if not hundreds of millions of people all over the world as their jobs and livelihoods are destroyed and issues like climate change carry on unabated.

ANJA KASPERSEN: You have reflected a few times already on what you see as the trends. They are very much described in your book, of course—what got us here and what the big issues of 2023 were that brought us to where we are now.

Looking ahead to 2024 and even to 2025, what trends do you foresee as being most impactful, and what should we be particularly mindful of as we navigate the future?

GEOFF MULGAN: I think we need to stay calm because I think the next year or two could be quite difficult. I am fairly pessimistic in the short run but quite optimistic in the longer run.

Next year will see a lot of elections.

ANJA KASPERSEN: Forty altogether.

GEOFF MULGAN: But I do not think most of those elections will grapple with these issues in a way which is particularly wise. I think it is very important for people who work in this space not to get demoralized, not to get panicked, and not to be hysterical. I tend to have faith that the world ultimately deals with its cosmic imbalances, if you like, and will in the end get to some better solutions around these things, as has often happened in the last 200 years. That will require people to take a longer view.

I often give the example of the United Nations, which ten years before it was set up was completely impossible, inconceivable, and a utopian fantasy, and then a few years later it just became a reality, obvious, and commonsensical. Many things are of that nature, but they require people to carry on doing the design work even in dark days and even when conditions look unpropitious.

I think there will be hubris. My memory of Greek myth, which may be wrong, is that hubris nearly always involves a punishment. Hubris does not get away with it. It is just a step toward a restoration of cosmic balance.

I do not think the huge imbalances of the present are very sustainable. I point out in my piece on billionaires that history gives quite a lot of warnings to the hubristic AI billionaires of the present. A fantastic book came out in France last year by Eric Vuillard called The War of the Poor, which describes in detail Germany in the 16th century, when the poor essentially rose up against the rich and slaughtered them in large numbers. The same happened in France, of course, in the late 18th century and in Russia in the early 20th century. There have been many occasions when regimes that were hubristic—absolutely convinced they were going to last forever and absolutely convinced of their divine right to rule and to be rich—ended up dead. I hope we can avoid those kinds of brutal corrections, but history does not go in straight lines. History has a dialectical quality to it.

I think the real question in the next ten to twenty years is whether we can do those corrections in a sane and balanced way with reasonable compromises or whether it will be much rougher and much more brutal.

ANJA KASPERSEN: To the point on hubris, it actually links back to when you were talking about Aristotle and what amounts to a virtuous character in our digital age. This is very much the Nietzschean take on hubris: how you reconcile the different contradictions that we all face within ourselves and in society, with hubris described as a vice and as a flaw in that character.

GEOFF MULGAN: I would love there to be a single billionaire who could say something intelligent about their class and its place in the world and what might be done to fix the billionaire problem. It is very striking. A lot of them are very smart people—I know quite a few of them—but not a single one has said anything thoughtful about billionaires as a group. They will issue any number of proclamations and manifestos on the future of tech or the singularity of this and that, but they are rather lacking the reflective wisdom which I think we need. The essence of a virtuous character is to be able to stand back and see yourself in that bigger picture with an ethical lens.

ANJA KASPERSEN: What would you recommend our listeners think through? What are the questions we should be asking as we enter 2024?

GEOFF MULGAN: In early 2024 I am launching a new organization focused on the design of institutions. I think part of our problem is that we are lacking the crucial public institutions we need, certainly in relation to AI and to a large extent in relation to climate adaptation and the governance of science more generally.

We are trying to create a team—we have some foundations involved, the United Nations, and others—to focus on what those needs are and what these new institutions should look like. They can look very different from those of ten or twenty years ago because they can use technology in radically different ways from the bureaucracies and international governing organizations of the past. I would love some of your listeners and colleagues to help us on that journey, because if we do not create those new institutions it is quite hard to see how we solve these problems.

That is one of the lessons of history. Progress is often embedded through new institutions, not just through new laws or new programs. It is through things taking institutional form. Examples in the United States are things like the National Aeronautics and Space Administration (NASA) in space or the Defense Advanced Research Projects Agency (DARPA), the inventor of the Internet. These were institutions. The results they achieved probably would not have happened without them.

The exam question is, what is the equivalent for the 2020s and 2030s which could have as big an impact on our lives a generation into the future?

ANJA KASPERSEN: Asking what institutions we need looking ahead.

GEOFF MULGAN: What do we need, and what should they look like, to maximize their use of intelligence of all kinds, to be answerable and accountable to the public, and to be swift and agile? That, I think, is what we want of the next generation of institutions. It is not an easy question to answer, but it is the one which most needs some really serious brainpower in the near future, I think.

ANJA KASPERSEN: With your assistance, I should say, because this builds on the work we were doing together, I launched the idea some months back that we need “middleware.” As you know—I am working in an engineering department now—middleware is of course a way of describing what you need to bind different things together. It is a computational term, more or less, for interoperability: allowing different systems to function together. You need that middleware—not the hardware or the software—to actually allow the hardware, the software, and all the other components to work together.

So, if I understand you correctly, it is not just a question of creating new institutions but of creating institutions that actually bring together the best of what we have.

GEOFF MULGAN: Exactly. This is the core of it. We use the language of the “mesh,” which is basically the same as middleware: institutions need to be meshes of the vertical and the horizontal, and they need to be responsible for the knowledge and data ecosystem around them, not just what is within their boundaries. They need what we call “outside-in” approaches, where they take account of their impact in the world and of the voices of the people affected. There is a whole series of design principles here, quite different from those which dominated 20th-century bureaucracies and 21st-century private companies. It is exactly that kind of meshing middleware which is the space where these things are needed.

They are not necessarily so hard to create in that space because they are not necessarily directly challenging established power and established hierarchies, but they can be incredibly effective one layer below the surface, operating quietly in the way that standards bodies have done in the past perhaps.

If your country is one of the 40 having an election next year, ask the politicians what they will do to train themselves and the next generation of politicians to be ready for the big challenges they will face.

ANJA KASPERSEN: Fantastic. What better note to end this conversation on? Thank you so much, Geoff.

GEOFF MULGAN: Thank you, Anja.

ANJA KASPERSEN: Geoff, our conversation has been incredibly rich and thoughtful. My deepest thanks for sharing your invaluable insights and expertise with all of us.

To our listeners, thank you for joining us, and a special shout-out to the dedicated team at Carnegie Council for making this podcast possible.

For more on ethics and international affairs, connect with us on social media @CarnegieCouncil. I am Anja Kaspersen, and I genuinely hope listening to this conversation has been worth your time and left you with something to ponder. Thank you.

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.
