AI as a Tool for Workers' Empowerment, with Christina J. Colclough

Nov 13, 2020

Following up on the AI & Equality Initiative's first webinar on artificial intelligence and the future of work, Carnegie-Uehiro Fellow Wendell Wallach and Dr. Christina Colclough, founder of The Why Not Lab, build on that discussion with a conversation about the future of the worker. How can new technology be used to empower workers? What are some progressive strategies and policies that can help to reach this goal?

WENDELL WALLACH: Hello. I'm Wendell Wallach. I am a Carnegie-Uehiro Fellow and the co-director of the Carnegie Council's new project on artificial intelligence and equality. Through this initiative we will be exploring the many ways in which artificial intelligence (AI) might be used to ameliorate existing forms of inequality, or may instead exacerbate structural inequalities and create new forms of inequity.

This is the second in our online series of discussions. In the first we talked with James Manyika, director of the McKinsey Global Institute, who told us about McKinsey's reports on the future of work in the United States and Europe and on the future of the social contract. It was a mixed story: he pointed out that artificial intelligence may actually create more jobs than it decimates, but those jobs will largely be ones that require a high degree of expertise or ones at the very low end of the service sector. In other words, artificial intelligence could hollow out middle-level jobs.

Today we are going to hear from Dr. Christina Colclough. I have known Christina for many years. We encounter each other at international forums hosted by the United Nations, the Organisation for Economic Co-operation and Development (OECD), and many other organizations. She has been a tireless advocate for workers' rights and for helping all of us fully appreciate and understand the ways in which digital technologies are going to affect workers.

She is regarded as a thought leader on the future of workers and on the politics of digital technologies. She advocates for the worker's voice. She runs The Why Not Lab, has extensive global labor movement experience, and for a number of years led UNI Global Union's Future World of Work policies, advocacy, and strategies. Christina was the author of the union movement's first principles on human data rights and the ethics of AI.

Thank you ever so much for joining us today, Christina. Let me turn the webcast over to you.

CHRISTINA COLCLOUGH: Thank you, Wendell, and as always it is a pleasure to share, in this case, a screen with you. As you said, we have had numerous opportunities to talk on and off stage.

Hello, everybody. I am going to talk a little bit, from the worker's perspective, about some of the challenges and negativities we are experiencing right now and about what some of the solutions could be.

When Wendell was saying the "worker"—I can't see you all—I was wondering how many of you are actually thinking of the worker as somebody different from you: that the worker may be a street cleaner, or the frontline worker in a retail shop. The fact is that many of us, the majority of us, are wage earners. We are workers. Yet we have come to believe that we are something different from that. So when I'm talking, really try to think about yourself, your peers, and your colleagues, and not just the workers doing the jobs that we are secretly hopeful we don't have to do. Understand that this narrative of the worker is not just about the blue-collar worker; it goes all the way up to the academics and the professionals, as I assume many of you are.

There are lots of challenges facing the world of work, or facing us in our societies, right now. We have devastating climate change. We have demographic changes; with the growing elderly population we will very soon face a lack of nurses and care workers, a predicted worldwide shortfall of 21 million care workers. We have to ask ourselves what is going to happen, especially to female labor market participation.

We have the geopolitical issues that we all know too well right now, which are really disrupting the world, a whole movement of power from the West to the East. We have, of course, this pandemic and its devastating effects in the here and now on our friends and families, but also of course its medium- and long-term effects on our jobs and our societies.

Then we have technological change. This digital change, AI and the digital economy, is really just one of the numerous forces impinging upon us as workers.

Let me remain for a while in this negative narrative and spend some moments with you focusing on the real experience and the devastating facts surrounding the world of work. I wish to highlight some of the assumptions and narratives, but then also to question them.

I can hear some of you thinking: Ugh, Luddite. She doesn't like technology. Well, that's far from the truth. I am actually a technological optimist, though I am at the same time very, very critical of the current trajectory we are on. I want to urge all of you as I am speaking to think: Does it really have to be like this? Could it be different?

In more or less random order, let me get going on some of these negativities and some of these striking contradictions that we are experiencing in the world of work.

First, the rising individualization and precariousness of work. Across the world, more and more people are left on zero-hour contracts, in the gig economy, or on short-term contracts, with very fluid rights. They are also left to bear the brunt of the market, so to speak: they are the ones who have to shoulder the fluctuations in supply and demand for their labor, yet they have no rights. Many workers are stripped of any social and fundamental rights, so they are left in enormous income insecurity, not knowing whether they can pay their bills from month to month.

This mismatch between our social systems and the current labor market is all too evident, for example, in the Yes vote on Proposition 22 in California. The third category of worker that is now being introduced has been in place in the United Kingdom for many, many years; it has not worked there, and it certainly won't work in the United States either. Work is work. My assumption is that no matter how that work is conducted, under whatever contractual or noncontractual form, all workers should enjoy the same social and fundamental rights.

Then again, as I said, who is this worker? Who are the workers? Even in a room full of workers, when I ask the floor, "Who in this room is a worker?" I seldom get more than 70 percent of the hands up. This is one of the tricks that has been played on us, to believe that we are something different from a worker, and this is probably a big part of the explanation for the decline in trade unionism—"Why should I collectivize with others when I have been told I am something special and something more?"

The fact is that we are seldom stronger than the weakest link, and this whole perception of ourselves as different from the worker has, I think, a large part to play in that. But there is also the highly unacceptable union busting. Employers are spending millions every single year on busting unions. For me, this should be forbidden. Any indirect or direct union busting has no place in our modern societies.

If we look at the futures of work—the plural is there deliberately; there will not just be one future of work, there will be futures of work as we experience them differently from wherever we are in the world according to the jobs that are available and the skills that we have—one of the things employers seem to agree with the workers on is the necessary reskilling and upskilling that will have to take place. If you look at the rhetoric around the world, they all latch on to the reskilling and upskilling as if the future of work could be solved through that. But they never mention who is going to pay.

How are we going to combine this increasing individualization and precariousness of work with this lack of funding for reskilling and upskilling? Can we expect an Uber driver to take two or three weeks with no income to go on a reskilling and upskilling course? No, of course we can't. So when employers talk about reskilling and upskilling, my answer is, "Are you going to pay?"

They also somehow believe that digital technologies are a given, that the rest of us have to react to the technology that is coming, that the technology is somehow superior to us as humans. This "technological solutionism," so to speak, is very, very dangerous. We hear it in the talk of, "Oh, the robots are coming!" as if it's a civilization greater than ours that is going to come and control us.

Again, we are not powerless here. The rhetoric is somehow numbing us into the belief that we cannot really do anything. Yet my claim is that it is precisely these technologies that we should govern and frame. They are not necessarily born evil, but they are not necessarily born good either. If we want the world that we seem to be envisioning in all of these AI principles and ethical AI thoughts, then we are going to have to govern these technologies so that they serve people and planet, and not just some people but the majority if not all people.

This links into the whole discussion of automation and job losses, the fears, and how Frey and Osborne's study has been grossly misrepresented, to be honest: the claim that 50 percent of all jobs are going to disappear and we are going to see devastating impacts on the world of work. Well, yes: if we continue down this current trajectory of doing nothing, then we very well might see, as Wendell was saying, this hollowing out of middle-level, middle-income jobs and the polarization of the workforce. This might very well happen, but again it is not a given.

We could demand of companies that when they invest in disruptive technologies they also be obliged to invest in their people, in their reskilling and upskilling, and in their career paths.

But many companies—and we have seen this during the COVID-19 crisis—are investing heavily in semi-autonomous systems in the hunt for productivity and efficiency. Our markets, as they are structured right now, call for this quarterly shotgun capitalism of proving that earnings are increasing all the time.

But I want to question this and ask: Are we producing ourselves to hell? If we look at our climate, if we look at the devastating impact this overproduction has had, are productivity and efficiency the goals that we should be striving for? Is it time that we move, as many are calling for, beyond gross domestic product as a measure of success? Imagine what our economies, what our policies, and what our markets could look like if we committed to the United Nations Sustainable Development Goals.

Across the world we are seeing rising inequalities, and we need to address them: inequalities between genders, between identities, between ethnicities, and between races. All of the research has shown that unless we really learn to govern the digital technologies, the algorithms, and the data sets, the bias and discrimination inherent in these systems are only going to accentuate the inequalities we already experience. From predictive policing to the calculation of the welfare benefits people are due, from automated hiring systems to the juridical system, again and again we see inequalities and bias being shaped.

At the same time, we seem to believe that these systems are efficient. Well, of course they are not, and many, many scholars and practitioners are flagging this, but it is a call, again, for the regulation and framing of these systems.

Autonomous tools need governing. In the world of work, I would hate to experience—and I think in many ways we already are experiencing—a situation where a worker who is looking for a new job does not see certain jobs in the online job announcements because that person, a priori and by an algorithm, has been deemed unfit for them. We must never have an algorithmic tool, an opaque system we don't even know exists, determining our life and career opportunities.

The same goes for autonomous hiring systems: Who gets hired? Who gets fired? On what data are these tools built? Do they match the context? If an autonomous decision-making system designed in the United States is deployed in Kenya, is it matched to Kenyan culture, to the institutions of Kenya? I doubt it.

We cannot turn a blind eye to the discrimination that is potentially happening. Many of us who work in this field look at the Social Credit System in China and shudder and go, "Ooh, that's too much," but again I want to provoke a little bit here and ask: "Don't we already have that in our parts of the developed world, only it's not run by an authoritarian state but by numerous private companies, known as well as unknown?"

Before I end my rant here on some of the negativities and some of the things we have to be careful of, I want to stress that no biological or social system has ever functioned on homogeneity. We need diversity. We need to work together to ensure that our labor markets are diverse and inclusive. Yet at the moment we are segregating, undervaluing, and underpaying the work of many of our peers.

With COVID-19, with this skyrocketing demand for surveillance and monitoring software, there is a rising awareness that we are being turned into objects, into numerous data points that are used to make inferences about us: What will your next move be? Is she likely to vote to the left or to the right?

Then we also have to understand that, contrary to what the International Labour Organization (ILO) declared in 1944, that labor is not a commodity, we are being commodified. We are becoming objects that are fed into these systems regardless of who we really are.

We must never, never accept that this is the case; if we do, we will lose our autonomy. We will lose our democracies, and I can really only echo Shoshana Zuboff's call that we urgently must "ban markets in human futures." We simply cannot accept that these influences are going to shape our work, our career, and our life opportunities.

A little word of warning here: as we all talk about this, experience it, and realize that it is happening, we still have to acknowledge that 49 percent of the world's population still has no access to the Internet. In this crisis, with schools closed, 463 million children across the world are not being schooled because they don't have access to this technology.

Education technology isn't solving this problem. Again, there is the call for us to avoid this technological solutionism. We need public investment in digital infrastructures, and not what is happening right now across the world, where the private tech industry is filling that void, offering the mobile masts in exchange for keeping, forever, the data that is generated. These power asymmetries being created across the world are only going to entrench themselves totally unless we turn the tide and start public investment in the Global South.

Racing forward: Where do we go from here? How can we put some relatively easy steps in place for us to take control over these technologies, for us to shape digital work and the digital society as we best see fit? Again, yes, we must ban markets in human futures, absolutely. Until we get there, there is some lower-hanging fruit that we could pick.

All of these AI principles being adopted at firm level and at government level—the OECD's being the first and only intergovernmental principles on AI—are great, but unless we really start putting flesh on the bone, so to speak, and turning principle into practice, they remain words of good intent.

At work we need to start asking: "Okay, how do we fulfill and actualize the principle of fairness?" Fair for whom? For management? For the workers? The only way we can solve this is by bringing dialogue back into vogue, so to speak. We need to co-govern these algorithmic systems at work. We need to accept that dialogue is actually the way forward. And I think all of us should urge the International Labour Organization to put in place a new type of convention on workers' data rights.

Across the world, workers are directly exempted from many data protection regulations—California is one, Thailand is another, Australia is a third. So workers' data rights need to be improved so that these inferences can be flushed out, blocked, and stopped.

On the digital divides we really must work together to empower our brothers and sisters in the Global South. We need to bridge these digital divides responsibly between the rural areas and the urban areas, between the Global North and the Global South, and we cannot accept in any shape or form that the Global South is forced to give away an asset they don't even have control over themselves yet, and that is their data.

On automation and job losses: again, let's oblige companies to put in place, for lack of a better word, a "people plan." When they invest millions of dollars in disruptive technology, they should be obliged to invest in the skills and competencies of their workers.

On skills: yes, as I said before, we need somehow to democratize access to this reskilling and upskilling. We cannot have an individualization of work and yet assume that every worker can pay his or her own way.

But we also—and this is very important—need to start talking about the soft competencies that we have. There are lots of AI systems out there right now that identify skills gaps, but they never take into consideration whether you are the glue that holds the organization together or whether you are the one in the workforce who has all the creative ideas. They seldom shed light on whether I am the one who has my antennae out for the well-being of my peers. But it is probably precisely those competencies that are least automatable and will therefore grow in significance in our labor markets.

On union busting: as I said, this has to be made illegal. We have freedom of association. We have the right to assembly. These should be respected, and any form of union busting, and the big dollars behind it, should be made illegal.

I could go on about my call for strong and inclusive labor markets. We have to realize that if we accept the precariousness of work for others, it will very soon boomerang back around to us. I think we should all be wary of this: "Oh, yes, all our workers can work from home now. COVID-19 has proved that that's possible." We must ask: Will this lead to the end of the permanent contract? What is preventing companies from outsourcing to the global labor market, chopping their jobs up into tasks, and putting those out there? What effects would this have on our wages and working conditions?

I am going to stop there now. I have been going on and on, and I can imagine that all of this is a little bit overwhelming, so over to you, Wendell.

WENDELL WALLACH: This was a truly excellent rant, and you left an awful lot of topics on the table for us to dive into much more deeply.

I see, Christina, that you more or less endorse this flood of AI principles that we have seen come along, and you state what so many have been saying: that now we have to operationalize them. From your perspective, in terms of worker rights, do you see those principles, particularly the more widely endorsed lists such as those from the OECD or the Beijing Principles that represent 1.4 billion of humanity, as more or less sufficient if we can operationalize them? Or do you feel there is something fundamentally missing in terms of our focus here upon worker rights and the ability to ensure that the digital revolution ameliorates inequalities rather than exacerbating them?

CHRISTINA COLCLOUGH: That's a great question, Wendell. I am actually curious to hear your answer to that, but let me give a little thought about what I think about these.

The OECD governments have adopted these principles, lately endorsed also by the G20 and five Latin American countries, and now we have to hold them accountable to them. If they don't put institutions in place and give new authorities the mandate to actually check, at the company level or the societal level, how these principles are being respected, then the principles are almost worthless. Then we are giving way to this soft law from the bottom up—"As long as I do a corporate AI accountability report per year, everything will be fine"—a little bit like we saw with corporate social responsibility reports, which many company owners have told me are actually just done for the sake of having to do them.

So I really think: no, we have taken a big step in actually adopting some of these principles, and now our governments have a huge and urgent task in building the infrastructure and the institutions to actually make sure the principles are adhered to and respected.

What do you think, Wendell?

WENDELL WALLACH: This is the challenge of posing a difficult question like this.

My concerns are very similar to yours. I don't know whether we are actually going to operationalize them, and I am concerned that what gets operationalized does not necessarily hold the feet of the deployers of these technologies to the fire. If it doesn't do that, then my real concern is that we are engaged in a period of ethics washing, where corporations embrace principles but don't really act upon them, particularly in regard to the conversation we are having right now.

We are in this inflection point in human history where humanity is being transformed by these technologies, but very few people get a voice in deciding how the technologies get deployed and what kinds of major structural changes we're making to our societies. I am particularly concerned that we have bottom-up voices and that we have more participation in those conversations than we have today, but how we get from here to there is not so clear to me.

You talked almost pejoratively about soft law. I am a proponent of soft law because I think the speed of digital transformation undermines our ability to put hard law in place quickly enough, and therefore sometimes soft law is at least a good first step. For those who don't know the term, it refers to standards, laboratory practices and procedures, insurance policies, and a whole plethora of mechanisms that can be useful but very seldom have any enforcement behind them.

I take it to mean that when you are talking a bit pejoratively about soft law it is that concern that without enforcement mechanisms it is hard to know whether we are going to get any effective governance at all. Am I correct on that?

CHRISTINA COLCLOUGH: Absolutely, you are correct. Wendell, one of the things that spooks me right now is how successful big industry and Big Tech, with their multimillion-dollar investments, are in lobbying politicians to not do their job.

What disturbs me most about this is that democracy is at stake here. I have read that lots of multinational Big Tech companies are doing quite good things, at least on the surface, in relation to checking the ethical dimension of the new tools they are developing and so on, but when you then look at who sits on their internal governance boards, there is not one single worker there. This is why I keep saying that when we look at the principles of transparency, auditability, or fairness we have to ask: Transparent to whom? Fair for whom? There is no way that a junior legal compliance officer in the company can answer that question truthfully. Therefore we need people around the table.

Yes, to govern this, to put these practices in place, to put the public institutions in place to actually enforce these principles, will take time. But then maybe we owe it to ourselves and our peers to hurry up slowly here.

WENDELL WALLACH: Let's say a corporation took you up on that and said: "Okay, we would like a spokesperson for user concerns." Let's say they go beyond that and say user concerns and worker concerns. How do you propose they go about figuring out who that member of their board could be? Where do they turn to for representation?

My experience has been that there is a kind of top-down paternalism, where we see various voices representing workers, underserved communities, women, indigenous communities, and small nations, but it is largely the same people. Though I love some of the people in that role, there are others who I feel are just placeholders. It seems to me we need to create some kind of network where we at least make available those who we think can engage in bottom-up representation but will also be trustworthy within the constraints that, for example, a corporate board of directors would want.

CHRISTINA COLCLOUGH: Absolutely, and I really want to support you on the need to open up these conversations and bring in far more voices. I think a lot of the criticism that has been raised around the white man (and now and again a woman) discussing these things is very valid.

But in the workplace, if we look at how we could govern some of these technologies—the monitoring and surveillance of workers—that's why we have unions. You have shop stewards, as they are called in some parts of the world, or staff reps in other parts, who are elected to represent the wider group of workers or employees. This is another reason to join a union: to actually have that seat at the table and that representation, which is a democratic representation, so if you are not satisfied with what your staff reps are saying, then it is open for discussion, of course.

In relation to the wider world, how do we make sure that we are inclusive and diverse? I think it is the job of all of us, if we are ever hosting an event, to make sure we are inviting a diverse group of people. It is also our job to hold our governments and companies accountable for whatever tool they are developing or deploying by making an impact assessment, and not just an impact assessment along narrow lines but along broader ones. To be able to do that, they will need to engage in dialogue about how this could affect indigenous groups, underserved communities, or the Global South, and so forth. To think that we have all the answers as office people sitting in Silicon Valley would be very wrong. I am being a bit harsh here, but I hope you understand what I mean.

But again, this is about hurrying up slowly to get the mechanisms in place, and honestly to break some of the myths that have been created. A couple of years ago I gave a speech; I was actually invited in by one of the Big Tech companies. They had never had a unionist inside their doors before, and when I was introduced I could hear somebody in the room go, "Ugh, a union person."

I picked up on that and said: "I heard that. Let's talk afterwards."

Afterwards, he was like, "I didn't know the unions had this point of view."

I think a lot of this conflict, a lot of the antagonism between us is built on myths and built on misunderstandings that we really should sit down and talk about.

WENDELL WALLACH: When I look at these governance concerns, from corporate all the way to national governance, one of my concerns is whether we have enough people who understand the issues and who can serve in those representative roles. I am always in great admiration of your understanding of these issues, but as you deal with unions around the world, I wonder whether you feel their leadership has an adequate handle on what is taking place, so that they can give effective expression to the concerns you are perceiving. Or are they taking initiatives, with you and others who do understand what is taking place, for example from the ILO, to educate themselves and their members, and to really upskill in what my co-director on this project, Anja Kaspersen, calls "digital hygiene"? She thinks of that as going beyond the political voice, even just ensuring that everyone in the world community understands what they are subjecting themselves to when they enter the digital workplace or even social communications through digital means.

CHRISTINA COLCLOUGH: We have all been kept in the dark on this. The public awareness building around the digital economy and this data extraction has only started within the last year, and not profoundly so. Okay, the Cambridge Analytica/Facebook scandal got people a little bit on their toes, but everybody listening to this above the age of, I would say, 40 sleepwalked into this situation. We never truthfully asked, "Why is Facebook free? Why is Google Translate free?" We got seduced by the magic of it all and forgot that there is no such thing as a free lunch.

Going back to the unions: the interconnections between the digital infrastructure, data inferences, workers' data rights, artificial intelligence, and algorithmic systems are very, very complex, and with the lack of public awareness raising and building, I would say there is ignorance—not meant negatively—around the existence of these systems and also around the potential of digital technologies.

I was recently at an event with a group of technologists who were telling me what they were working on, and I was like, "Is that possible?" We can't even imagine what some of these technologies can do, and when you then are a lay union person or a union leader and you have never met these technology people, then of course it is very difficult to build a critical voice.

But more and more people and more and more unions are waking up to this, and then facing the enormous challenge of: What now? How do we become more digitally savvy? What responsible tools are there that we can use which are not just further data-extraction tools? How can we push back and create a new digital ethos which is more responsible and has a different standard of ethics? That is going to take enormous time, and I wish there were more public awareness of this and more funding for work like this. Otherwise, these power asymmetries are just going to continue to grow.

WENDELL WALLACH: You gave one anecdotal example, but I am wondering more broadly. You tend to be one of the more tech-savvy people speaking for a massive portion of humanity. Do you find that the corporations or even the AI researchers listen to you, or do they tend toward dismissing your perspective because they don't feel you have the expertise that they believe is necessary to give informed advice?

CHRISTINA COLCLOUGH: Do I feel that everybody pushes me aside? No.

I think if people disregard what I say, it's because I have touched a sore spot. It is this whole recognition of oneself as a worker, of our own vulnerability in relation to these digital tools—it is hard to admit that it is not just everybody else; it could also be me.

That said, in the political circles, in the OECD, in the United Nations, in the Global Partnership on AI, and in all these other places, when you raise your hand on behalf of the workers it's "Oh, it's her again," or they have just gotten to know me as this devil's advocate, but I am not sure how much of what I say actually shifts something inside them, which is a shame. As you said, we don't need another circus. We actually need to open our minds.

WENDELL WALLACH: My colleague Alex Woodson has been monitoring the Chat.

Alex, perhaps you can take over and tell us what kinds of questions have come that Christina should respond to.

ALEX WOODSON: Thanks, Wendell.

The first question is from Adiat Abiodun: "How can AI be utilized in a country with a low level of technological advancement?"

CHRISTINA COLCLOUGH: Let me turn that into a [different] question: Should AI be adopted in a country like that? Again, what is AI? This is another one of the big mistakes that we make—we call everything that is technology AI at the minute, which is not really true.

I think the developing economies could maybe leapfrog over some of the many mistakes that we have made in our parts of the world and start by saying: "Okay, technology is coming. Facebook, Google, and the rest of them are offering to build mobile masts so all of our citizens can have access to the Internet."

But then they need to already start asking: "Do we have the institutions in place to govern this? Do we know what demands we should be putting in relation to the data control, data access, and so forth?"

Again, I think it would be fantastic if we were open towards one another and actually discussed: "How can you utilize that? A lot of people and countries before you have made horrible mistakes. What demands could you actually put on the table to ensure that your digital industrialization or digital transformation serves your people, your businesses, and your society?"

WENDELL WALLACH: Alex, before you get to the next question, let's also prompt those who have not put in questions that we may still have time for your questions, so do add them to the Chat.

ALEX WOODSON: This next question is from Lorenzo Belenguer: "How can explainability be made a basic principle before an algorithm is put to use?"

CHRISTINA COLCLOUGH: Explainability on what dimension—the explainability of the outcome of the algorithm, of the instructions to the algorithm, or of the data sets that have been used to train the algorithm, and so on?

One of the things I am working on now, and still refining as I go along, is this: Imagine a new digital tool being developed. We should require that the developer do an impact assessment which covers human rights, workers' rights, and societal concerns, and which documents what data sets the tool has been trained on, what the instructions are, the order of the instructions, and so forth.

That log—if you imagine it as a log—follows the tool. So if I am a company in Germany and I buy this tool, I receive that log, which will then help me do my own governance. When I use the tool I can check for intended and unintended outcomes and so forth, so that I can say—let's say it's an automated hiring tool—"This tool, which has been designed to identify the ideal engineer in this company, does not really fit into a German context. How can I adjust the algorithm or the instructions to make this more explainable, more transparent, and adjustable?"
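To make the idea described here concrete, such a governance log could be represented as a simple structured record that travels with the tool from developer to deployer. The sketch below is a minimal illustration only, assuming hypothetical field names and a hypothetical hiring tool; it is not an existing standard or product.

```python
from dataclasses import dataclass, field

# Minimal, illustrative sketch of a "governance log" that follows a tool
# from its developer to each deployer. All names are hypothetical
# assumptions for illustration, not an existing standard.

@dataclass
class GovernanceLog:
    tool_name: str
    developer: str
    intended_purpose: str
    training_datasets: list[str]   # provenance of the training data
    instructions: list[str]        # the tool's instructions, in order
    impact_assessment: list[str]   # human rights, workers' rights, societal findings
    deployment_notes: list[str] = field(default_factory=list)

    def record_deployment(self, deployer: str, context: str, finding: str) -> None:
        """Each deployer appends local findings, e.g., a cultural mismatch."""
        self.deployment_notes.append(f"{deployer} ({context}): {finding}")

# A German buyer of a hypothetical automated hiring tool receives the log
# with the tool and adds its own governance check:
log = GovernanceLog(
    tool_name="IdealEngineerScreener",    # hypothetical
    developer="Example Vendor, Inc.",     # hypothetical
    intended_purpose="identify the ideal engineer for this company",
    training_datasets=["US engineering resumes, 2010-2019"],
    instructions=["score education", "score tenure", "rank candidates"],
    impact_assessment=["possible gender bias in tenure scoring"],
)
log.record_deployment(
    deployer="German buyer",
    context="German labor market",
    finding="US-trained scoring does not reflect German vocational credentials",
)
```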

That is a possible way forward, but I think that is a very long discussion: explainability at what level?

Wendell, I would like to hear your view on that.

WENDELL WALLACH: Let me add a little bit to that because this is something I have talked about quite a bit over the years.

It may be that for a lot of the algorithms we are talking about deploying, explainability isn't that important. If you are looking at a massive amount of data and you are going to use the output to make decisions about what kinds of research experiments to put in place, you don't necessarily need any kind of explainability, though the researchers should be sensitive to whether there might be, for example, biases in the input data and factor that into their judgment about how seriously to take the output. That is one level.

At the other extreme, we have mission-critical applications. In those circumstances we should actually be rejecting the deployment of mission-critical algorithms unless somebody is willing to take responsibility for what could go wrong should those algorithms fail. At the very least, we should have the forensic capability to look back after the fact and see what went wrong so that it does not occur again. That might be practical for a self-driving car, but it is not practical when you are talking about an algorithm that might actually launch a munition. That is just not acceptable at all.

The problem is that we have these very different levels or kinds of deployments, and they make different demands on what kind of explainability should be there, or what kinds of testing and certification regimes need to be in place before the algorithms are deployed. We really have not even done the work of drawing the distinctions between those categories, let alone put mechanisms in place to ensure that happens.

CHRISTINA COLCLOUGH: Wendell, can I add to that? Yes on mission-critical; I totally get you there.

For example, for an autonomous system deployed in a workplace, at any and all times management has to be the responsible actor. They are putting this in place, and it might have discriminatory bias or whatever intended or unintended effects.

But at the moment—and this has really spooked me—before lockdown I was giving lots of speeches, also to lots of employers, and I asked them: "Do you have governance mechanisms in place? Do you know how to unpack the algorithm?" They all said no.

What they are essentially saying, though, is that they are giving away power to a thing, to a proprietary piece of software coming in from the outside, and letting it dictate what happens inside their company. This is very, very dangerous.

WENDELL WALLACH: Some of us have recommended, Christina, that corporations need to put in place AI ethics offices and review boards to catch this kind of thing and to deal with it on every level, from the engineers who build the algorithms to being able to report to the board of directors if something is being deployed that actually could put the corporation in a liability situation.

Do you support that? If so, do you have some proviso or anything that you think those ethics boards or ethics officers should be doing to protect worker rights on the job?

CHRISTINA COLCLOUGH: Number one, the workers have to have a seat at the table. That is really important.

Number two, I think we have to look at the education of our engineers, our computer programmers, and all of those developing these tools. My stepson is now graduating as a computer scientist from university, and he has had two months of the methodology of science. He has not had any teaching on ethical or human rights concerns.

We need some templates. I think we need some model cards around what questions we should be asking, how we find responses if we don't understand the systems, and so on. But there is a lot of training that needs to be in place for ethical review boards to even have a genuine role in all of this.
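As a rough sketch of what such a template might look like in practice, here is one way a review board could track unanswered questions before sign-off; the questions and the helper function are assumptions for illustration, not an established model-card format.

```python
# Illustrative sketch of a review template an ethics board might work from.
# The questions and the helper are assumptions, not an established
# model-card standard.

REVIEW_QUESTIONS = [
    "What decision does this system make or support?",
    "What data was it trained on, and who is missing from that data?",
    "Which workers are affected, and were they consulted?",
    "Who is accountable when the output is wrong?",
    "Can an affected person see, contest, and correct the outcome?",
]

def unanswered(answers: dict[str, str]) -> list[str]:
    """Return the review questions that still lack a substantive answer."""
    return [q for q in REVIEW_QUESTIONS if not answers.get(q, "").strip()]

# Usage: the board flags every open question before signing off.
for question in unanswered({REVIEW_QUESTIONS[0]: "hiring triage"}):
    print("UNANSWERED:", question)
```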

ALEX WOODSON: I think we can get to a couple more questions.

This one is from Sreekanth Mukku: "Thanks for laying out the workers' futures landscape brilliantly. My question is: In the Global South, getting work or employment is itself one of the biggest challenges. Low education and skill levels and high unemployment rates compel huge populations not to fight for rights or to seek social protections. How do we deal with these challenges at the policy level, from both national and global standpoints?"

CHRISTINA COLCLOUGH: That is another excellent question, which also goes a little bit beyond the topic here, but of course the degree of informality, or informal work, in the Global South is just unacceptable. In some countries up to 98 percent of all workers are in the informal economy. They have the rights in principle, but they have no means to claim those rights.

Again, let's look at our throwaway economies and complex supply chains. Maybe there should even be a label inside everything we buy that says what the supply chain of this piece of clothing has been and what rights the workers have had, or something along those lines, because we can so easily turn a blind eye to the conditions of the workers who enable our goods to be sold so cheaply in the Global North. There are lots of mechanisms on business and human rights being put in place here, but of course I think we have to start with ourselves on that issue.

On the informality of work, yes, this needs to be a global pressure. I think we have to understand in the Global North that the exploitation of workers and their working conditions in the Global South comes at a price that is too high to justify our five-dollar jeans.

ALEX WOODSON: This question is from Carnegie Council's Grady Jacobsen in Somerville, Massachusetts: "How can we set up more concrete metrics for the accountability of private sector companies and corporations to guard against ethics washing and surface-level changes that fail to ameliorate the effects of AI on workers' rights? Are there any examples of accountability structures that have worked or are working?"

CHRISTINA COLCLOUGH: Grady, that's great.

You can say the collective agreement is an accountability mechanism. We have in many countries, especially in Europe, the right of codetermination. So workers sit on company boards; there have to be bodies for dialogue, consultation, information, and so on.

As long as we accept that this power asymmetry can be as big as it is, and as long as we accept that in the United States it is so difficult to form a union—you need 50 percent plus one to form a union, yet we have all this union busting going on—then we are not going to create the best, most autonomous accountability system possible, and that is a collective agreement between workers and employers.

On a more macro scale, on the nation-state scale, are there accountability measures? There are, partly, in the General Data Protection Regulation (GDPR) in Europe, where the national data authorities have a much greater mandate than they had before. Here a worker or a union can file a suspicion with the national data authority, which is then obliged to investigate whether there has been a breach of the regulation or not. And vice versa, of course; anybody can file a complaint or a suspicion. That is one way of holding companies and organizations accountable to the GDPR.

But workers' data rights, as I said before, are very weak across the world, and this I think is no coincidence; they are subject to heavy industry lobbying. In the California Consumer Privacy Act, the amendment to exempt workers was partly accepted, until 2021. Why? We can all speculate on why workers seem to be in such a weak position, but this has to be remedied and far more accountability measures put in place.

One of the things—in continuation of our discussion before—is that the national data authority could have an expanded mandate to see these governance logs of algorithmic systems, which both developers and deployers of these technologies should be writing and using.

WENDELL WALLACH: For those of our listeners who don't know what the General Data Protection Regulation is, it was passed by the European Union. It sets very high standards in terms of what rights users have regarding how their data is used. It even gives a right to withdraw from participating in this data collection.

The difficulty is how strictly this will be implemented by more rigorous law or upheld by the courts, but this very high standard has been adopted by the State of California and does function as a de facto guideline: Every time you give a website permission to collect your data—they are asking for those permissions now—that is all because of GDPR.

CHRISTINA COLCLOUGH: Yes.

ALEX WOODSON: I think we have time for one more question. This is from Bev Hall: "In a supply chain context we are seeing distinct shifts to tech solutions. How would the power imbalance be challenged through a value chain which may extend over many borders/countries?"

CHRISTINA COLCLOUGH: Again, we have the United Nations Guiding Principles on Business and Human Rights. That would be an excellent place to start. And I think the more cooperation we can have amongst our nations, and amongst experts across our nations, raising the issues that we should be aware of, the better, so that a pushback, or a list of demands, can be built from countries down through the value chain.

I think we have to be very careful right now, because COVID-19 discussions are leading many industries to say they are going to break up their value chains and supply chains, automate more, and lower their risks, so there will be even more need for us to cooperate there. But the tech solutionism in this Global North/Global South, developed/developing world conversation has a lot to do with digital colonialism. This is something we really have to avoid: that the norms and values embedded in systems built mainly in the United States and China are diffused through the world through very opaque processes and a lack of transparency, and as such become a form of colonialism. This I think we have to be very aware of.

But cooperation, flagging things, and leapfrogging some of the mistakes that we made in the Global North I think would be a very strong way to push back on some of this colonialism that is taking place.

WENDELL WALLACH: What about what is happening in the digital trade negotiations at the World Trade Organization (WTO)?

CHRISTINA COLCLOUGH: Oof.

WENDELL WALLACH: Is that helpful or unhelpful?

CHRISTINA COLCLOUGH: Very, very unhelpful. I would love to be able to see all of you on this call and ask, have you heard of the e-commerce discussions happening on the fringes of the WTO? Not many have. But essentially let me just give you five points of what will happen if these ever get adopted as new digital rules.

WENDELL WALLACH: I can give you one minute for your five points.

CHRISTINA COLCLOUGH: Okay, I won't give the five points, but what they will literally do is lock the Global South into a very path-dependent trajectory where data has to flow freely out of your countries to multinationals with no legal or physical presence in your countries, and this will be extremely exploitative. So please do have a look at those negotiations.

WENDELL WALLACH: I probably should not have brought up such a deep question as we are finishing up, but I think it serves by way of whetting everyone's appetite for the breadth and depth of concerns that come up within the topics we have touched upon today, looking at worker rights within the digital economy.

Again, first of all, let me just thank you, Christina. I think everyone listening in will agree that this has been tremendously informative and clearly touched upon topics that I had not fully considered, and I am sure that is true for all of our listeners.

This is, as I mentioned, the second in our series. There will be a podcast in which Anja Kaspersen talks with Doreen Bogdan-Martin of the International Telecommunication Union about the history of digital access. That is such a big issue right now, when we have, for example, 450 million schoolchildren who have no digital access at all and therefore no access to education during this COVID-19 crisis.

In December we will have another webcast with Anja Kaspersen and myself, where you will get to meet her, and we will also discuss in a little more depth what we perceive to be the issues within this project and some of the roads we hope to go down.

In addition, we are likely to have some podcasts presenting some of the topics that were to come up at the International Congress for the Governance of Artificial Intelligence, which has been postponed because of the COVID-19 crisis.

Again, thank you ever so much, Christina, and thanks to all the crew at the Carnegie Council for Ethics in International Affairs who have made this possible. For any of you who would like to tune in or make your colleagues aware of this podcast, we have just put up our website at www.carnegieaie.org. It will give you access to our podcasts, to transcripts of the podcasts, and to other information and data about the project. The website has just gone online today, and we hope to populate it with more and more information.

Again, thank you for tuning in, and thank you, Christina.

CHRISTINA COLCLOUGH: Thank you, Wendell. What an honor. Bye-bye, everybody. Be safe.
