
Future Politics, with Jamie Susskind

September 14, 2018


JOANNE MYERS: Welcome to this podcast, which is coming to you from the Carnegie Council in New York City. I'm Joanne Myers, director of Public Affairs programs here at the Council.

In a few minutes I'll be speaking with Jamie Susskind. Jamie is the author of Future Politics: Living Together in a World Transformed by Tech. He is a practicing barrister and a former fellow at Harvard University's Berkman Klein Center for Internet and Society. He studied history and politics at Oxford and lives in London.

Jamie, welcome to this podcast.

JAMIE SUSSKIND: Thanks for having me.

JOANNE MYERS: As the title of your book indicates, one of the most important issues of our day, one that is taking place not only in real time but one that will transform politics and society, is digital technology, which of course raises many questions. But before we address the future in a world transformed by tech, could you tell us what you mean by politics?

JAMIE SUSSKIND: When I think of politics I think about the collective life of human beings, why we live together, how we order and bind our collective life, and the ways in which we could or should order and bind that life differently. That's quite a broad definition, and I take it deliberately because I think if we limit politics just to parliaments and legislatures and procedures that would be typically associated with politicians and the like, then we can miss new and important forms of power which can affect our lives, affect our democracy, our freedom, and social justice in just as profound a way as the traditional idea of politics does. So I look at it broadly, and that's the focus that I take in the book.

JOANNE MYERS: But you also said that you predict that we will need a fundamental change in the way we think about politics. What do you mean by this?

JAMIE SUSSKIND: We're witnessing the rise of new and strange forms of power, most of which come through technology. I think what that's going to require is a new approach for us to think about how we hold that power to account.

Technologies exert power over us in a number of ways. Sometimes it's by setting rules and the codes that we have to abide by, so when we take a drive in our self-driving car, it won't drive over the speed limit, it won't park illegally, and it will automatically pull over for a police car, and so forth. That's a very different world from the one in which we currently live, and the technology itself exerts a kind of power over us.

Other times technologies exert power by controlling our perception of the world—filtering the news, filtering our search results, choosing what we do and don't see of the outside world, what's important, what's not, what's true, what's false. That's a kind of power in itself because it sets what's on the agenda. It tells us what to care about and what to ignore. As anyone who has ever run a meeting knows, the most powerful way to keep something from ever being changed is to keep it off the agenda altogether.

Additionally, technologies are increasingly gathering an enormous amount of data about us. In a sense, they watch us, and other people watch us through the data that is gathered and re-parceled and sold on. We change our behavior when we know that we're being watched. We discipline ourselves, and we're less likely to do things that are perceived as sinful or shameful or wrong.

Also, the more that is known about us, the easier it is to influence us, as we saw in the last election with Cambridge Analytica microtargeting voters with messages tailored to what the algorithms perceived that those particular individual voters would like to hear, perhaps even if it wasn't the truth.

So as we project into the future, tech companies will be able to exert these kinds of powers in a way that is miles more profound than they do just now, with increasingly capable systems of artificial intelligence (AI), an almost exponential growth in the amount of data being gathered, and technology becoming integrated into the world around us—in the physical world as well as the world of cyberspace. I predict that in the future it will become outdated to think of the state, or political institutions as we typically conceive of them, as the only political entities that we need to hold to account.

JOANNE MYERS: I guess it's going to affect our freedom and how we establish and maintain justice, but in the past in a democracy the strong usually dominate the weak by exerting power. So what will become of democracy?

JAMIE SUSSKIND: What will become of democracy partly depends on what we want to make of it. We already know that the deliberative aspect of democracy—how we share information and reach rational conclusions—is being changed profoundly by the Internet and by social media, whether it's through fake news or filter bubbles, where we become polarized into little groups of people who agree with us. But if you look further into the future, we're going to face a whole host of new and interesting philosophical questions about democracy, and I think it would be naïve to assume that the current system is the last and best form. For instance, we may come to ask which areas, if any, of public policy are apt to be decided by systems of artificial intelligence.

Or we may ask whether we should be able to vote directly on things through apps on our smartphones or delegate our vote in such matters to an artificial intelligence to vote on our behalf or to someone else through a system of liquid democracy, so if there's a vote on the local hospital, I can delegate my vote to a consortium of doctors and nurses. The technology is going to exist to enable this to happen, and democracy could look very different from the ticking of a box every couple of years that we're used to now and that we've had for quite some time.
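The liquid-democracy idea described above can be illustrated with a minimal sketch: each voter either casts a ballot directly or delegates to another voter (or a group such as a consortium of doctors), and votes are resolved by following delegation chains. All names and data structures here are illustrative, not from the book.

```python
# Minimal sketch of liquid-democracy vote resolution.
def resolve_vote(voter, delegations, direct_votes, seen=None):
    """Follow a voter's delegation chain until a direct vote is found.

    Returns None if the chain is circular or ends without a vote."""
    seen = seen or set()
    if voter in seen:          # circular delegation: treat as abstention
        return None
    seen.add(voter)
    if voter in direct_votes:  # the voter cast a ballot themselves
        return direct_votes[voter]
    delegate = delegations.get(voter)
    if delegate is None:       # no vote and no delegate: abstention
        return None
    return resolve_vote(delegate, delegations, direct_votes, seen)

# Example: on a hospital measure, "alice" delegates to a doctors' group,
# and "bob" delegates to "alice", so his vote follows hers.
delegations = {"alice": "doctors_consortium", "bob": "alice"}
direct_votes = {"doctors_consortium": "yes", "carol": "no"}

print(resolve_vote("alice", delegations, direct_votes))  # yes (via delegate)
print(resolve_vote("bob", delegations, direct_votes))    # yes (chain of two)
print(resolve_vote("carol", delegations, direct_votes))  # no  (direct vote)
```

Real proposals for liquid democracy add per-topic delegation and revocation; this sketch only shows the core chain-following mechanic.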

So I think there are interesting philosophical questions surrounding the future of democracy. I think a lot of people will have listened to what I've just said and been horrified by it, but that's a debate we're going to have to have. By and large, I think that those who control the most powerful technologies, whichever system we choose, are going to increasingly control the rest of us.

Think about the technology of bots, for instance, which is playing an increasingly important and prominent role in online discourse. A bot is an artificial intelligence that currently embodies itself as a social media account, so they might speak to you on Twitter or argue or debate with you. They're becoming incredibly sophisticated. [In June 2018] a chatbot here in the United Kingdom was said to have passed the general medical examination, obviously doing better than the many students who didn't, and that's just an automated system that learned from enormous amounts of data to answer the questions asked of it.

In the future, when these systems grow in power and capability and they're not just disembodied lines of text, but they have faces and they're capable of reading our emotions as well through sensors and the like, then definitely a question could be asked about the future of human deliberation there as well. What place is there for us in a democracy when it's increasingly difficult to have your voice heard over the voice of artificial entities that might be employed by pressure groups or political parties or the rich and powerful?

I think there is a whole host of questions about the future of democracy that we've not yet engaged with, and these are some of the issues that I try to look at in the book.

JOANNE MYERS: Do you think choice will become eroded during this process?

JAMIE SUSSKIND: When you say "choice," do you mean choice of outcomes, like policy outcomes?

JOANNE MYERS: Policy outcomes, yes. Let's go there first. Let's talk about policy outcomes, for example, and just about the way you make decisions, your choice as a citizen, or will the algorithms control your choices and direct you in one direction without having the ability to make choices in others?

JAMIE SUSSKIND: It's a great question, and the answer I believe is yet to be determined. You can definitely see a world in which an increasing number of decisions are made for us by algorithms whose nature is unclear and concealed from us. Increasingly important decisions about our lives, whether we get a mortgage, whether we get insurance and on what terms, whether we get a job—72 percent of résumés are now not seen by human eyes. We trust algorithms to make all kinds of important decisions already, and you can see that trend growing in the future.

Alternatively, technology can be used to broaden our minds and to expand the number of options which we understand to be available to us. You could well conceive of an app or a system that explains to you the political issues of the day and, based on the data that it has about you, what might be in your interest as a political outcome. You can definitely see technologies being used to bring people closer to the people in power, to hold them to account as they sometimes do on Twitter and the like, which is obviously a very young technology.

I think—and I argue in the book—that how we structure the public sphere and the technology in the public sphere in the future is very much open to debate. However, if we do nothing about it and we just let the technological system develop according just to market logic, which is basically what we do just now—if things sell and generate money, they succeed, and if they don't, they don't—if we leave technology to become a purely economic tool or appendage, then we shouldn't be surprised if the outcomes are not necessarily best for democracy or best for justice, because those two things aren't always aligned.

JOANNE MYERS: This implementation of automated processes does create unique challenges for both law and for policy, so is there a role for lawyers and public interest practitioners in ensuring the lawful, ethical, and conscientious implementation of data-driven processes?

JAMIE SUSSKIND: I'm biased as a practicing lawyer myself, but I do think so. Lawyers are great at making work for themselves, but I think it's a society-wide task, and ultimately lawyers only work with the laws that exist at a given time. Some do so in a more conscientious way than others, but I'm slow to criticize anyone for trying to work within a framework of laws in a way that's most advantageous to them, as companies often do.

With law it is often for politicians and the public to keep it in the state that it should be, but certainly lawyers who operate at the sharp end of the regulation of society have a duty to make sure that the law is upheld. Some would consider themselves to have a duty to make sure that the laws where necessary are changed. But that's not necessarily the primary job of the lawyer. I think it's the job of the citizen and the politician.

JOANNE MYERS: But if technology becomes more and more powerful, there will be a need to regulate it, and we hope it won't be machines regulating machines, that somewhere human agency will kick in and set some type of regulation.

JAMIE SUSSKIND: Indeed, although as we increasingly see, for instance, in the financial sector, where lots of trading is done by algorithms—it's automated and happens at lightning speed in response to changes in the market—some of the best ways of regulating those algorithms are themselves other algorithms to detect irregularities and check compliance with the law.
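The compliance-monitoring idea Susskind describes, algorithms checking other algorithms, can be sketched very simply: a regulator's program scans an automated trade stream and flags prices that deviate sharply from recent behavior. The window size, threshold, and data below are invented for illustration.

```python
# Minimal sketch of "algorithms regulating algorithms": a monitor that
# flags trades whose price is a statistical outlier versus recent trades.
from statistics import mean, stdev

def flag_irregular_trades(prices, window=5, threshold=3.0):
    """Return indices of prices deviating more than `threshold` standard
    deviations from the rolling mean of the preceding `window` trades."""
    flagged = []
    for i in range(window, len(prices)):
        recent = prices[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma and abs(prices[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# A stable price stream with one sudden spike at index 7.
prices = [100, 101, 100, 102, 101, 100, 101, 150, 101, 100]
print(flag_irregular_trades(prices))  # [7]
```

Real market-surveillance systems are far more elaborate, but the principle is the same: automated oversight fast enough to keep up with automated activity.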

Sometimes machines move so fast that mere human oversight isn't going to be enough. That said, if there is ultimate human oversight, that's what we should be looking for. My book isn't really about AI overlords taking over the world. That might happen in due course, but there is a very interesting and important period before that happens where technologies are still under human control, under the control of specific humans, and we have to think about how we regulate them.

At the same time—and I know this will have particular resonance in the American tradition—giving too much power to the state, allowing the state to co-opt the power of technology, is not always itself a wise thing. There are risks associated with bringing more and more powerful technologies under the control of the state just as there are risks with leaving them in the hands of private companies. That is something that will have to be balanced this generation.

JOANNE MYERS: So you don't think we underestimate the role that algorithms play in society. You seem to indicate that they will self-correct at some point.

JAMIE SUSSKIND: No, I don't think they'll self-correct. I think that algorithms are under the control ultimately of their human owners and operators, even if those owners and operators sometimes don't understand how they work.

I would argue completely the opposite. We can't be complacent about which areas of life algorithms extend into and how those algorithms function and the choices and decisions and values that they bring to those functions. I argue for civic vigilance against technology while at the same time acknowledging all of the awesome benefits that it has brought to our life.

JOANNE MYERS: You do talk about algorithmic audits. You are I guess then in favor of that, or where do you come out on that issue?

JAMIE SUSSKIND: As algorithms are used more and more to make decisions that are profoundly important to people's lives, whether because they affect the democratic process, or restrict or expand people's liberties, or distribute things of importance around society, be it mortgages, insurance, jobs, and the like, I certainly don't think it's going to be sustainable for those algorithms to remain commercially secret black boxes, jealously guarded by their corporate owners, who leave, as it were, the citizen at the mercy of whatever decision is made behind closed doors.

At one end, there are those who argue for full algorithmic transparency—everyone should be able to know how Google's algorithm works or how Facebook's news-ranking algorithm works. Algorithmic audit is an interesting idea because what it would allow is a trusted third party to lift the bonnet of an algorithm without revealing it to the outside world and confirm to the outside world that it meets some standard of nondiscrimination or of functionality just like an auditor would check your accounts for irregularities and the like and give them a Kitemark or a mark of approval.
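One concrete thing such an auditor might do, sketched minimally here, is probe a black-box decision function for disparate approval rates across groups without ever seeing its internals. The stand-in model, tolerance, and applicant data are all invented for illustration and are not from the book.

```python
# Minimal sketch of one algorithmic-audit check: query a black-box
# decision function and compare approval rates across groups.
def audit_approval_parity(decide, applicants, group_key, tolerance=0.1):
    """Return per-group approval rates and whether the largest gap
    between groups is within `tolerance`."""
    counts = {}
    for a in applicants:
        g = a[group_key]
        approved, total = counts.get(g, (0, 0))
        counts[g] = (approved + int(decide(a)), total + 1)
    rates = {g: approved / total for g, (approved, total) in counts.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap <= tolerance

# A stand-in black box: approves applicants with income over 50.
black_box = lambda a: a["income"] > 50

applicants = [
    {"group": "A", "income": 60}, {"group": "A", "income": 70},
    {"group": "B", "income": 40}, {"group": "B", "income": 55},
]
rates, passed = audit_approval_parity(black_box, applicants, "group")
print(rates, passed)  # {'A': 1.0, 'B': 0.5} False
```

The auditor can publish only the pass/fail verdict, a "Kitemark," without disclosing the model itself, which is the appeal of audit over full transparency.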

Algorithmic audit is just one of many along a spectrum of policy options that would deal with what I think is going to be an important public policy question, which is: How do we deal with these algorithms that increasingly affect our lives?

JOANNE MYERS: Just going forward, just to sum up this discussion, what are the main developments that we should be concerned about?

JAMIE SUSSKIND: There are three major technological developments that to my mind are transforming the way that we live together. The first is increasingly capable systems, digital systems that are able to do things which we previously thought required the creativity or the genius of human beings. You see it in a whole host of endeavors, whether it's lip-reading or transcribing human conversation or reading emotions or playing chess or playing Go. In an increasing number of endeavors, digital systems are able to do things as well as humans or better than them. That's a trend that to my mind is not going to slow in the next few years; it will increase.

The second is what I call "increasingly integrated technology." In the past, a computer was the size of a room, and you had to walk into it if you wanted to interact with it, as Luciano Floridi says, and now we live in the age of what someone else has called the "glass slab." But in the future technology won't resemble either of those things. It will be more seamlessly integrated into the world around us, into our architecture, into our appliances and utilities, into our furniture and clothes, even into our bodies in the form of nanomedicines and the like. The distinction between online and offline, cyberspace and real space, will become a lot less clear. That's the second major transformation.

The third, which rolls in with the first two, is the increasingly quantified society, the idea that more and more of our lives—our utterances, our movements, our preferences, passions, emotions, feelings, thoughts, views, and activities—are recorded as data caught and captured by the technologies around us and then sorted and processed and kept in permanent or semi-permanent form.

Between all three of those changes—increasingly capable systems, increasingly integrated technology, increasingly quantified society—we're moving into what I call the "digital lifeworld," which is basically a different stage of human existence; not AI systems stomping humanity under their awesome boot, but a world in which there are incredibly potent technologies that are everywhere, basically, and they're always watching us.

Those are the trends that I think are going to transform the way we live together.

JOANNE MYERS: There's no doubt that digital technologies are changing human behavior, and we can only hope it's for the better. I guess the real question is whether democratic societies can harness the energies released by the information revolution without being overwhelmed by the disruption they bring.

I thank you for challenging us to think about the future before it's too late. The purpose really is to start the conversation and begin the discussion of our future so we can exercise some control over it.

Thank you, Jamie, for being with us.

JAMIE SUSSKIND: Thank you so much.

