Technological Progress, with Simon Johnson

Oct 19, 2023 · 49 min

In this episode, host Hilary Sutcliffe explores . . . technological progress from another angle. Does technology increase prosperity, make our lives better and create lots of new jobs? Or in reality does it promote greater inequality, more badly paid jobs and exploited workers, with the prosperity going to the few and not the many?

Sutcliffe explores with Professor Simon Johnson the lessons of over a thousand years of technological progress, and they discuss the practicalities of what he calls a more "human-complementary" approach to technology.

Professor Johnson is an economist at MIT and co-author with colleague Daron Acemoglu of a new book, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.

HILARY SUTCLIFFE: Hello and welcome to From Another Angle, a Carnegie Council podcast. I am Hilary Sutcliffe, and I am on the board of the Carnegie Council Artificial Intelligence and Equality Initiative. In this series I get to talk to some of today’s most innovative thinkers, who take familiar concepts like technology, democracy, and human nature, and show them to us from quite different angles. What really excites me about these conversations is the way they challenge our fundamental assumptions. Certainly for me—and I hope for you too—their fresh thinking makes me see the world in a new way and opens up a whole raft of possibilities and ways of looking at the future.

Today we are looking at technological progress from another angle. I don’t know about you, but I hear a lot about how technology innovation is going to increase our productivity, make our lives easier, and create lots of new jobs to replace the old ones, all as a normal byproduct of progress and growth.

At the same time, what I see much more clearly, it seems, is greater inequality, more badly paid gig economy jobs, exploited workers, and not much sign of this prosperity that we have been promised. Very fortunately my guest today has spent many years researching and observing this phenomenon of technology progress and can shed some light on whether this is just my imagination or is a reality. I am delighted to welcome Professor Simon Johnson, an economist at the Massachusetts Institute of Technology (MIT) and coauthor with his colleague Daron Acemoglu of a new book, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.

Welcome, Professor Johnson.

SIMON JOHNSON: Thanks for having me.

HILARY SUTCLIFFE: My question here about technology progress is very focused on the here and now and about how this time it is all going to be different, but what is so fascinating about your book is that it steps out of that. This book promises and delivers an astonishing thousand-year overview of the history of technology and progress, and a clear picture seems to emerge of the dynamics of innovation.

This is a big ask, but could you take us through a helicopter view of these last thousand years and the conclusions that you have reached?

SIMON JOHNSON: Absolutely. I think the last thousand years, Hilary, are characterized by a couple of phenomena, “repeated elements,” if you like.

One of them is that for most of this thousand years in Western society, the societies that grew out of Western Europe, we have been innovating. We have always been innovative, even in some of the darker periods of our history. If you go back 10,000 years I think you could make the argument that many societies have also been innovative. Some of them ran into brick walls, which we might discuss, but these Western societies became for various reasons creative and innovative, and there was a lot of bubbling up of ideas.

The problem for a long time was that that creativity and innovation did not become shared prosperity. In the Dark Ages, for example, there were lots of new ideas, lots of changes in agriculture, and lots of improvements in commerce, but what did the ordinary people get out of it? Nothing much in terms of living standards. However, a number of very big cathedrals were built off the money the elite got from the improved agricultural productivity. That did not do very much for the material well-being of ordinary people.

What changed only in the late 19th century is that this innovativeness, this creativity—even in the first 200 years the Industrial Revolution delivered very little to ordinary people, but after the 1850s those benefits become shared much more broadly. If we had written this book in 1980 I think we would have said: “Problem solved. We have figured out how to turn an innovative culture into shared prosperity.” That would have been the view in 1980.

Unfortunately today, almost 45 years later, we have to say: “Oops. We seem to have receded. We are still creative, we are still innovative, and we are still coming up with new products, but the shared-prosperity piece has slipped away from us again.”

The question is: How do we get back on that track, that hundred-year period where things worked much better for most people? Can we get back there? Is there something inherent in modern technology that makes it harder or maybe easier to get back? Those are the questions we deal with in the book.

HILARY SUTCLIFFE: I was intrigued that you also dig very deeply into particular examples that I think shed some light onto that question of how do we learn from the past today. Do you want to take us through a couple of your favorites?

SIMON JOHNSON: My absolute favorite of all time is Edwin Chadwick and the creation of modern sanitation because everyone has forgotten this and it is absolutely crucial to how the standard of living turned around in the 19th century and how health conditions were improved. It actually made it possible for us to live in very concentrated numbers in large cities, which are also in many cases hubs of creativity, without killing each other through infectious disease.

In the mid-19th century, as you know and as I am sure many of your listeners know, health conditions in places like London were absolutely awful. There were many people living together in crowded conditions, and the toilet situation was awful. It was even worse in Manchester or Sunderland, these brand-new industrial cities.

Along came this chap, Edwin Chadwick, who was actually a Benthamite, and the Benthamites were very focused on a utilitarian view and what was efficient. I think it is fair to say they were not very strong on empathy.

Chadwick asked: “Why are all these poor people dying? Why are they not showing up for work?”

“Because of this infectious disease.”

“What’s the root of that disease?”

“Well, it’s the fact that sanitation is awful.”

“Let’s take existing technology, recombine it, and have water flow into people’s homes on a continual basis and use that same water to flush out the human waste and dispose of it safely in processing plants away from where everybody is living.”

It sounds straightforward when you say it and it may be a bit obvious to modern people, but it was actually not obvious. It was immensely controversial. It required of course substantial investments, including from the public sector—it is public/private partnerships throughout this—and it transformed how we live in Western cities and how many people live—sadly not everybody has access to this—around the world today. It was absolutely technology focused on improving people’s lives. We can call it “social progress.”

The Benthamites were not big on social progress for the sake of it; they were big on, “Let’s have more people work productively; let’s have more efficient processes.” Fine. Turns out we can actually bring those things together and deliver on something that has changed the world.

HILARY SUTCLIFFE: Very interesting as well, the Industrial Revolution. I often get called perhaps a Luddite, and we hear a lot about Luddites at the moment. The Luddites then were saying a lot of what those of us who are questioning the trajectory of certain types of innovation and the lack of focus on social progress and individual empowerment are saying now. You have an interesting whole area about the Industrial Revolution and the Luddites. Give us a bit more on that.

SIMON JOHNSON: There were several episodes of so-called “Luddite” machine breaking. I think the one that is the most important as a reference point for today is the one that happened in the early 1810s. The anger there was against the power loom, which was being used for weaving.

What had happened was that spinning became industrialized and mechanized in the 1700s. Spinning for cotton in particular—cotton was well suited to the machinery that developed. Cotton created a fabric that was somewhat appealing to British people but very much so to people who lived in warmer climates, so it became a big industry to import cotton from India, spin it in Manchester, turn that yarn into cloth, and export the cloth around the world.

The process of weaving was done on handlooms, so they were machines, but they were very simple machines. They were in people’s houses. There was a system of outsourcing basically—or “outwork” as it was called—and there were merchants who coordinated this, but it was done by people, often men, who were quite skilled. This probably started as a part-time job in many parts of Lancashire, for example, but by the early 1800s it was a full-time job.

Along comes the power loom, and that takes the work away from the handloom workers. These are skilled people who have built up a trade, and between 1780 and 1800 they made a lot of money. It was known as the “golden age of weaving,” and all of a sudden they are wiped out. The key question in all these episodes of automation is: What new jobs are you creating?

Some of the existing tasks are going to the big machines that are now in factories, and those factory machines are going to be tended by children—so child labor is rising sharply here—and by women, who have always worked very hard but are now working hard in these factory conditions. Some men are going to have jobs as overseers in these factories, but a lot of men have now lost opportunities. There is no unemployment insurance, it is very hard to move out of Lancashire, their incomes go down a long way, and they are extremely angry because what are the new tasks? What else can they do? Where can they go to work?

The answer is nowhere, so the number of handloom weavers remains very high. There are about 250,000 at the beginning of the 1800s, and there were still about 250,000 by 1830, but their incomes dropped like a stone, so of course they are angry. These men wanted to work. They wanted to be productive.

Stopping the power looms did not work out, but it is always the case in all of these episodes that people are looking for what is new, and you need a system, a process that is generating new tasks, particularly new tasks that require skills. That is when you get better-paying jobs, and that was absolutely missing in the early 1800s, 1810s, and missing still in the 1820s. It is only with the coming of the railway age after about 1830 that the broader economy begins to turn around.

HILARY SUTCLIFFE: What was both depressing and intriguing about your book was how technology really did take jobs without replacing them with new ones, apart from, as you said, two or three pockets of hope at different times. Tell us a bit more about the conditions. What was the context that actually made some areas more progressive for more people and some not?

SIMON JOHNSON: A number of things start to go right in the second half of the 19th century. The first one I would emphasize is the railway age. Railways reduced transportation costs, people could move around, they could see what was happening in other places, it generated a lot of jobs, and many of those jobs were quite well-paid because of the responsibility that comes with being a signalman, for example, on the railway. So the railway is a big boost.

The second is actually the arrival of the Americans on the scene as an industrial power. In 1851, at the Great Exhibition in London, the Americans basically had two things on exhibit, guns and some stuffed animals they had shot with the guns. That was it in 1851. By 1890 the United States is the leading industrial power in the world.

How did they do that? It was not using skilled labor. Skilled artisans tended to stay behind in Europe. What the Americans had was a lot of unskilled labor, and what they did in their industrialization was find ways to make those unskilled workers more productive and give them skills. “On-the-job training” I suppose we would call it now.

They became skilled, they became better paid, and this combines with the third element, which is stronger political rights for workers. I do not want to exaggerate how quickly democratization came to Britain or to the United States, but between 1850 and 1910 there is a rise of real equality in terms of voting rights, there is a rise of trade unions—there are also conflicts about trade unions—and of course there is a rise of women’s rights.

If you combine these elements, you are getting more pressure for higher wages, better working conditions, and better living conditions, and you are combining that with better industrial technology and higher productivity. That combination is what delivers higher wages and a better standard of living, and that becomes the basis for the shared prosperity which subsequently—with some bumps in the road—delivers during the 20th century.

HILARY SUTCLIFFE: Let’s zoom in to today. That concept that you map across the thousand years, that sort of success model, if you like, can you talk us through how you feel that has and has not worked over the last let’s say 30 to 40 years?

SIMON JOHNSON: A couple of things have gone wrong with this model. Also, to set the stage very clearly, the 1950s, 1960s, and 1970s were very good for shared prosperity in most industrial societies, so Western Europe, North America, and of course Japan gets in on the act also. So you have the shared prosperity, but what then happens in the 1980s is a shift in the ideology of business. It becomes quite a bit harsher and I think anti-worker might be fair to say. It is certainly not pro-worker.

Second is that we have this digital transformation, so the digital technology arrives and automation comes into factories. That pushes out people who were middle-paid and middle-skilled but does not create new opportunities for people with those skills. What it does is push them down into the lower-skilled jobs, where they are competing with people who do not have much skill to start with, and that depresses wages. This is a big part of the widening wage inequality that we see.

Of course you are going to layer on top of that globalization, so increasing trade and competition from low-wage countries. Remember that globalization is made possible by big improvements in technology including telecommunications and transportation and is linked by all that digital technology. So we have technology driving directly wage inequality and facilitating globalization, and those are the big drivers of that widening income inequality and the disruption and some people would say the death of the shared prosperity model in the European tradition.

HILARY SUTCLIFFE: Talk to us about how there was a deliberate, conscious aspect to creating shared prosperity rather than basically making the rich richer, as we have found throughout history. In terms of now, I think you said that in the 1960s and 1970s we saw something, but looking at the very present day, the really high-tech digital economy we have now, in my work I hear quite a lot about how we need to deliberately create technology for social progress, we need to deliberately create technology to empower people, and yet, back to my earlier question: Are we seeing that? Are we not seeing that, or are we seeing a little bit of both?

SIMON JOHNSON: I think that is a very good framing. I think that is the top policy priority when it comes to technology, and technology drives pretty much everything in our societies, and I think the key point here is that there always are alternative paths for technology. You can make choices, you can go more in one direction, or you can go more in another direction. That is the right framing.

To answer your question directly, I think where we are currently is very heavily weighted toward automation first, bring in the machines and the algorithms to replace workers, and let the chips fall where they may from that. I think that is very dangerous. That is a continuation of this digital divide that came out of the 1980s.

It does not have to be that way, Hilary. There are a lot of other innovations that could be pursued that are what we call “human-complementary.” Now, all technology is complementary to some humans, but what we mean by human-complementary is it boosts the productivity of people who have middle skills, people like electricians, nurses, and teachers, people who are not intense technology specialists and do not have a Ph.D. in engineering. These middle-paid, middle-skilled people can become a lot more productive using generative artificial intelligence (AI) and its cousins and what comes next, but that is not the current focus. The current focus, which is dominated by a few big tech companies and dominated still by the version of capitalism that emerged from the 1980s, that path of technology is going to be pretty damaging to other people.

HILARY SUTCLIFFE: It is peculiar. I think about and sometimes write about this slightly weird moment that we are in where, on the one hand, we are trying to make people more like machines, so we are surveilling people. Amazon workers have to work like a machine. You are actually being made into a machine almost. Then, on the other hand, we are trying to make machines more like people, so we are trying to get generative AI to be more like a person.

It seems to be happening separately, whereas, as you say, machines, digital technologies, in fact all technologies—I get involved in a few of the others as well—really should be there to enhance the power and capacity of people. How have we gotten into this really weird polarized land that does not actually see the human as the center that needs to be supported, helped, and empowered, but instead puts the machine at the center?

SIMON JOHNSON: Let me give two answers to that, both of which are rooted in history. One is some things we have forgotten and the other is some things we remember far too well. The thing we have forgotten is what happened in the big transformation in the early 20th century, the arrival of the assembly line. Henry Ford was a big driver of this.

Henry Ford was a difficult character, and I am not endorsing his personal views in any way, but if you look at what Henry Ford did, he put car production on the assembly line and then he brought electricity to the assembly line. When Ford started car production in Detroit in the early 1900s there were about 40,000 people working in that industry, producing about the same number of cars. It was artisanal work. Ford automated a lot of those tasks.

However, when he and all the people around him put those tasks onto the assembly line they also created a lot of new tasks, and those new tasks employed by the end of the 1920s 400,000 people. So you have ten times as many people working, you have moved routine tasks onto the assembly line, but you have created a lot of tasks that require human judgment, discretion, and communication. Those are the new tasks, they require skill, and are quite well paid. Car production of course has gone up from that 40,000–50,000 level to about 3 million cars at the end of the 1920s in the United States.

To get the benefits of automation it needs to be combined with a process, a system that generates new tasks and those new tasks have to require and help people develop skill. That is the magic place. Answer one is that we have completely forgotten that.

HILARY SUTCLIFFE: Do you feel Henry Ford had a pro-society approach in mind or was it just a lucky accident for him?

SIMON JOHNSON: Henry Ford worked with Thomas Edison. Edison of course was this genius inventor who did not invent or discover electricity but he did imagine what you could do with electricity and he did bang away at trying to improve things like the light bulb until he actually made a lot of progress. I think Henry Ford embodied and absorbed exactly that kind of mentality.

I think the dynamic of that situation, the availability of human talent, and the competitive nature of the industry they were in encouraged the creation of these new tasks and encouraged a lot of upstream work—materials going into cars—and a lot of downstream work—how you sell the cars, how you maintain the cars—and of course that led to some other things like the development of suburbs.

If I had to say exactly—and I do not want to idolize one individual—Ford’s version of capitalism was nowhere near as anti-worker as what came after the 1980s. That is also important. Ford was a bit paternalistic. Some of that is controversial, and I am not recommending that either, but he was not trying to drive the workers down and drive wages down all the time. He was actually trying to pay in many instances higher wages to retain workers and reduce labor turnover, which is a good way to get shared prosperity.

HILARY SUTCLIFFE: The car industry is an intriguing case study, isn’t it? I was looking recently at a tiny little company called Riversimple, which makes a hydrogen car. They wanted to rethink the business model of cars so that it no longer follows this pattern. They have given themselves a council which includes future generations. What they basically did is reinvent the business model of cars, and one of the things they did was reinvent how the car is made so they did not have to have these large factories and huge sites that take over a whole city. The way they design these cars actually enables people to make them in small cities, so you can empower different types of jobs in a smaller place.

They even started to look at how the business model of cars is really all about, “We sell it really cheap and then we make the money out of servicing and parts.” They are actually leasing the parts, so it is in the business interest of all of those component manufacturers and the cars themselves to make them last longer.

I am looking also at innovative business models that deliberately make those attempts whilst at the same time wanting to make loads of money and sell loads of cars. Do you have any other industries or organizations that you have seen looking to disrupt these sorts of business models?

SIMON JOHNSON: Looking at MIT, we have a lot of people who are very much looking for new ideas and disruptions on the technology side, and I think many of those will have implications for business, including more modular approaches to production, as you are saying. There is a lot of discussion around “additive manufacturing” and what that could do. There is a lot of discussion around the future of biotech and the future of pharmaceutical production, which could be done in very different ways. So, yes, I think pushing on business models is important.

Of course mass production by itself is not necessarily the problem. It can be. You can have some problematic business models, and I am not a supporter of the car industry by any means, but just producing things at scale and very efficiently is not the problem if you combine that with creating new opportunities and new tasks. It is two legs, and you have to walk on both legs. If you just focus on automation, you just say, “Right, we are going to automate everything in our supermarket and you are not going to need anybody to check out anymore,” well, where do those people go? What are the jobs that are being created? How does the system absorb that labor?

I think what we have seen so far with self-checkout kiosks in supermarkets has not been very encouraging because what you do there is basically transfer the work onto the customer, who is not paid, the customer experience does not particularly improve, depending on the particular implementation, and you do not raise the wages of the people who stay behind in the supermarket. What you do is you change the balance of power between management and labor, so the workers who remain are quite afraid of losing their jobs because it gets easier to replace them.

I think there are a lot of reasons to worry about how technology change is imagined, deployed, and implemented in the modern world and what people are going to do with generative AI.

A key point though, Hilary, is it does not have to be that way. There are other, much more human-centric paths that we could choose for technology. They are just not the ones that are being driven by the people who have taken control over the vision for the future of technology at the moment.

HILARY SUTCLIFFE: That is a great segue to your final chapters. What I really liked about your book is that a lot of academics are very good at problem framing with not necessarily practical, real-world ideas about how it could be better. Talk to us about the three concepts that you have in mind of how technology could be deployed better for a more pro-society and more human-centric approach.

SIMON JOHNSON: I think the three things we emphasize are, first of all, change the narrative. That is what we are doing in this podcast. We are saying: “Look, it does not have to be the way that any particular tech guru or billionaire says it has to be. There is lots of choice here, and society can shape technology.” Edwin Chadwick, as I said, is my favorite example.

The second point is that you need to have some countervailing power. What emerged in the late 19th century—and everybody who lives in Britain knows this—was the rise of political rights for more people, the spread of the franchise, and the rise of trade unions. That countervailing power pushing for higher wages is super-important. Now we do not have strong trade unions in many industrial countries today, so we have lost that countervailing power. Where are you going to get it? Where are the voices and where are the pressures for safeguards on surveillance, for example, for limiting how much monitoring there is of workers, or for making sure that it is done in an appropriate manner that makes them safer and not a manner that exploits them more? The organization there is somewhat missing, but that is an ongoing discussion also.

The third piece is to have more visionary proposals and push people and ask: “What do you want technology to do? What is it that we are missing?” I always say to people, “Let’s focus on the problem to be solved, not what you think the technology can already do. What is it that you would like to do with generative AI, for example, and then let’s work on that and figure out why that is happening or not happening or how to catalyze it, from the government or wherever.” I think that is a conversation in which everyone can participate. I don’t think you need a Ph.D. or even a degree in anything to say: “Right, I’ve got this problem, and this is why it’s not happening. How can we bring technology to bear to help?”

I disagree with Bill Gates on a key point. Bill Gates says he has never met a problem that he cannot solve with technology. I am not somebody who thinks that technology is a magic bullet, but I do think technology combined with policies and social pressures can be really helpful, and I do think that is a better way to focus your technology development, and Bill Gates, to his credit, has done some of that, for example, on vaccine development.

HILARY SUTCLIFFE: If you use the word “innovation” instead of “technology”—innovation is about ideas, things, and ways of doing things—then you can broaden out. One of the things for me is that “technology” has come to mean only digital technology or science-based technology, whereas innovation can mean all sorts of things, and some of the new ideas you talk about in your book are innovations that need more attention drawn to them.

I started doing this work in the 1990s, when it was about responsible business, then in the early 2000s with nanotechnology, biotech, AI, robotics, quantum tech, and neurotech, and I just get wheeled out every five years for the next new tech to have this “What problem are you trying to solve with your technology?” conversation. Honestly, the answers I get are “technological progress, this great science, this great tech” virtually all the time.

So one of the things I am looking at too—back to your third point, in fact—is what processes we can embed. What sort of incentives or metrics can we start to use to leverage this conversation? Your thousand years reinforced my belief that without forcing ourselves to do it we are just going to relive the last thousand, so as you say we need some policy incentives, we need some metrics, and we need belief systems to make this happen.

You have some big ideas in your book about that. Give us some of the big levers you feel are important.

SIMON JOHNSON: First of all, Hilary, I think, yes, that is a good framing, and it is the right way to set priorities.

Let’s be realistic. In the case of the United Kingdom, it has been more than a 300-year struggle. What came out of the Glorious Revolution of 1688 and the rearrangement of power in Britain was this idea of what became known in the 1700s as the Whig Supremacy, which was basically John Locke’s interpretation of the legitimacy of power rooted in property. Previously you had the divine right of kings: The king is in charge and tells you what to do. That collapses with the Stuarts, but now what is the basis for legitimacy, who is in charge, who gets appointed to run things, and so on? That comes from property.

That then becomes in the 19th century making money. Property was a bit of a static view. Making money is this very dynamic view, and we have the Benthamites saying, “Let’s be more efficient; let’s raise national productivity.” That becomes the core ideology. That is why you get pulled back in Britain all the time to, “Let’s privatize it.” You are being pulled back to: “Let’s make it into private property because that is going to be better, it is more legitimate, and it will be more efficient.” You are being pulled back all the time to the 1720s, even when you do not realize it. But that is a very important point—I am not poking fun at anyone, I am not criticizing, I am just saying it as I think it is—because if you recognize that you have that strong pull you need to have a counter going the other way.

What is the counter going the other way? What are your objectives? What is the problem you are trying to solve? Is it education? Is it healthcare? Is it homelessness? Is it something else on the social scale? Unless and until you have those metrics first and foremost—I don’t know what you hear when you turn on the evening news, but the top thing I heard on the business report of the day is how the stock market has done. Really? If you focus on the stock market, was it up or down today, you know where that is going to lead. That is all about profits, that is about future profits, imagined profits, and so on. Where are your social indicators? Where is the discussion of how we have done in terms of broader impact of any kind? Those metrics are not sufficiently salient.

All of that is up for grabs, all of that can be changed, but it is a conversation. It is a struggle because you are up against—we did have the French Revolution, which laid on top of the Glorious Revolution, so we do have this notion of citizenship and equal rights, but that is not what prevailed in the 19th century. What substantially prevailed was the Industrial Revolution with the development of some countervailing powers as we discussed.

This is one piece as you say of a repeated struggle. Every time a new technology pops up you are going to have this, and I think the answer is the same: You have to push to solve the problems that are important.

HILARY SUTCLIFFE: This is obviously an international audience, so we do not want to talk too much about the United Kingdom because we are taking away human rights at the moment, but we won’t go there.

What visions do you see in emerging economies or non-Western economies that can give us some inspiration?

SIMON JOHNSON: First of all, the big worry for those economies is that the cost of running an authoritarian system has come down. If you remember the Arab Spring and the idea that social media would somehow liberate people, make it easier to communicate, and undermine dictators, well, that was just over a decade ago, and that is not what happened. That is not how the Arab Spring ended, and it is not what we are seeing in authoritarian regimes around the world. It turns out that social media can become and has become a tool to control people.

I think the real danger in the AI tech development world is that there are two poles developing, one in the United States and one in China—I understand the European Union may have some influence through its regulation, but they are not driving the development of technology—and if you think about those two poles, I have reservations about the American one, but I am even more concerned about the Chinese one because that is very focused on top-down control and the use of pervasive surveillance, and the cost of monitoring has come down a lot. That technology will be sold or made available to anybody who wants it and wants to align with China obviously, and that is a lot of countries.

I think pulling those societies in that direction—and that direction has a lot of suppression of dissent, suppression of voices, and oppression of workers—trading with those countries and buying their exports when the labor in those factories is oppressed using this AI technology is very problematic. There are already big human rights issues in the world trading system obviously, but I think it is going to become more dramatic and more at odds with what I hope will develop in Western societies, which is AI with safeguards and limits on how much powerful people can exploit the less powerful because this technology has massive potential in that direction.

I think the United States will have safeguards and I am sure the Europeans will, but I think emerging markets are going to face some big choices here. I am very concerned about how it is going to play out.

HILARY SUTCLIFFE: Back to the question of metrics, as you said one of the things that always strikes me is that we hear about well-being indexes and we hear about the Happiness Index. There are lots of great ideas about new metrics that will—almost a different conversation—force different incentives. Do you have any favorites or any examples that you see that give us a little bit of hope in that direction?

SIMON JOHNSON: I think the overall metric discussion has not gone particularly well. I do not think those have gained traction. I think they are a bit too vague for many people.

I think focusing on what happens in education is a good one. Education is very concrete. Education is measurable. I think focusing on what happens in healthcare is very important, quite salient, and you can get compelling numbers on that.

I think the key numbers are really wages and the wage distribution: what happens to those middle incomes, to what extent are people in the middle prospering, to what extent are people at the bottom able to move up, or is the middle being squeezed so that you get a big dichotomy? We have data on that. We can see that happening.

We know that COVID-19, for example, for all of its terrible, evil effects did help some people at the lower end of the wage distribution move up, so there has been some unraveling of the previous three decades of widening of income inequality that came from the COVID-19 effect. Will that be maintained? Is that something that is just a temporary blip? That is a key discussion, but that is a well-informed discussion that is extremely anchored in the work, for example, of my colleague at MIT, David Autor.

I think we do not have the headline number for the evening news, but we do have some metrics that are serious, well-documented, and could be made more salient.

HILARY SUTCLIFFE: I was intrigued that you are not a great fan of universal basic income, this idea of how do we make jobs and how do we help those people who inevitably will have their jobs replaced. A lot of people are putting a bit of hope in universal basic income. Tell us a little bit more about that and what you think is right and wrong with that approach.

SIMON JOHNSON: I think people want to work, not everybody all the time, and obviously people want reasonable jobs and want decent compensation. I think with work comes status and identity. If you create a class of people who do not work and receive income regardless, that is not going to be a politically strong group of people I would suggest, and I think they will get squeezed over time. I would be skeptical that those real incomes would keep up.

I think the political economy and my understanding of human identity suggest to me that we are better off if we can create more good jobs as opposed to saying to people: “Here is a bit of money. Don’t worry about working.”

HILARY SUTCLIFFE: We will not go into whether that was their design or not, but I am interested in this idea of the “good” jobs because I look at a lot of these jobs, and this is from the perspective of incredible privilege in sitting here in that fancy top bedroom, but some of the jobs that we are trying to save are bloody awful jobs, and some of the jobs that we are creating are awful jobs, and people could do more fulfilling things with their lives than some of the jobs that we are creating. We won’t even go into things like “content moderation.”

I am interested in this concept of making good jobs because what I see is that a lot of the jobs are just dreadful jobs. How do we incentivize the making of good jobs? How do you say what is a good job and not a good job? I think perhaps things like the gig economy and employment rights and those types of things obviously have a part. What are your thoughts on that?

SIMON JOHNSON: This is why we emphasize the need and the opportunity to develop human-complementary technology including using generative AI. There is a cadre of workers that we call “modern craft workers”—an electrician, a nurse, a nurse-practitioner, a teacher—all of these people could be empowered with technology to do their jobs better and more effectively—exactly what we talk about in terms of better would obviously depend on the specific context—and that is not what is happening.

I think that is the key point. You want people to be able to do more with their existing skills and to learn new skills through using a technology, and that will lead to higher wages for them.

I completely agree that many of the jobs we are automating—very tough manual-work jobs, for example, leading to breakdown of people’s health in their 40s and 50s—it is good to automate those jobs. I also agree that some of the jobs we are creating, like in warehouses—Amazon warehouses are notorious for this, but they are not the only ones—are tough jobs, and if you can automate more of that work that could be a good thing, but where are the new tasks you are creating? What are those people going to be doing? How productive are those people going to be? What you do not want to do is push people out of warehouses into other low-skilled jobs where they are competing with people who do not have much skill, because that just drives wages down.

Creating new tasks through trying to develop technology deliberately to complement human capabilities and complement all of humans irrespective of how much formal education they have I think is a very legitimate focus and emphasis to have.

HILARY SUTCLIFFE: That comes back to the first of your three points, which is this idea of changing the narrative. You and I both talk about this not being inevitable. We have to rise up, the citizen, the civil society, and yet here we are with generative AI. I feel that it is being done to me and I have not got a hope of changing it, but I see also glimmerings—to your point of changing the narrative—you call it “machine usefulness” or “humane tech,” there are all sorts of names, “responsible innovation,” to say, “No, we don’t want this technology used in this sort of way.”

Let’s just finish on this. We have both talked about this. I am still surprised, looking at your thousand years, that we do not seem to have learned. We have been through all these different technological things since the 1990s, and we are making the same mistakes all over again.

I wonder what your view is. Why are we psychologically and systemically still making the same mistakes over and over again and expecting a different result?

SIMON JOHNSON: That is John Locke, the Glorious Revolution, the Whig Supremacy. That is the 1700s. That is the power of this idea of property and then making money as the source of legitimacy and power.

That is an anchor in our societies, Hilary, and it did have some benefits, but it is also a bit of an albatross around the neck now. To break from that, read the history of computer science, which we have in our book. There is a longstanding tradition of people saying, “Hey, we can make these computers more useful to more people.” However, the way things played out, the way the commercial models developed, the way that Microsoft, for example, scaled up and took over, took us on a different path.

There are many giants on whose shoulders we can stand to say, “Hey, let’s make things more human-complementary, let’s raise the productivity of workers, let’s boost their pay,” and that was a vision—I would say arguably one of the leading visions—of capitalism in the 1920s, for example, in the United States.

What we got in the 1980s from the work of Milton Friedman and others was this very nasty, harsh version: “You’ve got to squeeze the workers. The workers are a cost to be minimized. Drive the workers down.” That is not the way capitalism was previously organized after World War II. It is not the vision that we had from the 1920s. It is not the way it has to be. Those are choices, powerful choices anchored in that 18th-century experience, but we can break free from them, and, you know what, generative AI is an opportunity to break free precisely because it is technology that can be developed by many people in many different directions.

Unfortunately of course right now a lot of it is controlled by two big companies. Daron and I talked to people in and around those companies—very positive, pleasant conversations—but those companies are continuing to press very hard down this path of “squeeze the workers, automate; creating new tasks is someone else’s problem.”

HILARY SUTCLIFFE: I am just reading this morning interviews with OpenAI about their vision, but it was so techno-centric. It was only about that.

People do say the reason we are making such a big fuss about the problems with generative AI, which is clearly a democratizing technology, is that it is the creative jobs, the middle-class jobs, that are going rather than those who can be ignored.

SIMON JOHNSON: That may be what happens, and I agree that is the path we are on. It does not have to be that way. Exactly those kinds of creative jobs could be ones which are made more productive and we could have more of them, but not if you use generative AI to wipe out the job of Hollywood screenwriter, for example. That is a current topic. You could destroy a lot of those jobs right away.

Why would you do that? Why are you taking human creativity out of the writing? Only if you want to drive costs down. That is a bad mentality, but I agree that it is a very powerful political and economic mentality.

HILARY SUTCLIFFE: You have to give us a little bit of hope to finish with, Simon.

SIMON JOHNSON: I am giving you a lot of hope, Hilary. I am saying that while it does appear that way, we choose the path of technology. It is fine to recognize the strong historical weights on our existing path, but we can break free. We did it before. From 1850 to 1950 or 1980—I understand there were a couple of depressions in there; you have to get the macroeconomics right—in terms of technology development and in terms of the impact on jobs, that was a good century and a bit.

We turned away from that. We made other choices as a society or as a set of societies, particularly in the industrial West. We can do a lot better, and this new technology is an opportunity to do that, so I think we should be pushing ourselves to develop more human-complementary technologies, where human-complementary means complementary to all humans and not just people who have PhDs in engineering.

HILARY SUTCLIFFE: Actually I am a most optimistic person, so I do not know why I got myself in that little gloom moment there. Others have been on our podcast, Claudia and Jon, looking at how to change the system, how citizens can drive the agenda, and how, as you say, with your countervailing powers, civil society organizations and ordinary people can actually say: “Look, we really have had enough now. It has to be different.” What is great about your book, as you say, is that it has happened before and it can happen again.

It is really exciting to talk to you. I urge everyone to read Simon and Daron’s book because it is both inspiring and very thoughtful about how we got where we are, how we can dig ourselves out of this hole that we are digging ourselves into, and how we can make more productive use of technology for the future.

SIMON JOHNSON: I think you said it very well, Hilary. I appreciate your time. Thanks for all the work you have done, and thanks for keeping at it.

HILARY SUTCLIFFE: Fantastic. Thank you very much, Professor Simon Johnson. It is a great honor to have you on the podcast. Really good luck with your work and really good luck with your book. I think it is going to be a pivotal book, and I am reading all the criticisms as I was coming to talk to you, and I can see people taking onboard what you are saying. More power to you, and good luck in the future.

SIMON JOHNSON: Thanks very much, Hilary.

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.
