To Save Everything, Click Here: The Folly of Technological Solutionism

Apr 16, 2013

Highlights

Very soon, "smart" technologies and "big data" will allow us to make sophisticated interventions in everyday life. Technology will create incentives to get more people to do the right thing. But how will this affect society, once political and moral dilemmas are recast as uncontroversial and easily manageable matters of technological efficiency?

Introduction

JOANNE MYERS: Good morning. I'm Joanne Myers, and on behalf of the Carnegie Council, I would like to thank you all for joining us.

The subject of this program, technology and risk, is one of the core themes of the Carnegie Council, and we will be addressing it as a lead-up to our Centennial in 2014. So we are especially pleased to welcome back Evgeny Morozov to participate in this Public Affairs Program.

Today he will be discussing his latest work, entitled To Save Everything, Click Here: The Folly of Technological Solutionism. On his previous visit not so long ago, Evgeny discussed his award-winning book The Net Delusion, which was a timely corrective to the notion that the Internet could prove a game changer in the struggle to overthrow authoritarian regimes. At that time he raised the question of whether the increased use of the Internet and technological innovations such as Twitter, YouTube, and Facebook had in reality brought increased democracy to those under authoritarian rule, or whether these new cyber tools actually made things worse.

His comments have proved to be prescient in that the so-called savvy leaders of the Egyptian revolution who used their cell phones to organize the Arab Spring have given way to the more established and better organized Muslim Brotherhood.

In many ways, today's discussion takes us even further than the earlier one, as Evgeny now wants us to take a broader and more critical look at technology and to follow his lead as he questions whether the urge to find Internet-based solutions to problems that either don't exist or are only likely to fester is a good thing. It is what he calls "solutionism." "What if," he writes, "technology is not the answer to all that ails us? What if we defer to certain data without understanding its limitations? What if more data or big data, as it is now known, isn't necessarily better data?"

You might ask, will large-scale searches help us to create better tools, services, and public goods, or will it usher in a new period of privacy incursions and invasive marketing? Will data analytics help us to understand online communities and political movements, or will it be used to track protesters and suppress free speech?

In To Save Everything, Click Here, Evgeny raises these issues and others as he ponders how solutionism and this Internet-centrism will affect our society once political, moral, and irresolvable dilemmas are recast as uncontroversial and easily manageable matters of technological efficiency. In the end, he argues for a new post-Internet way to debate the moral consequences of digital technologies.

While plenty of books extol the technical marvels of our information society, To Save Everything, Click Here is an original analysis of the myriad searches, clicks, queries, and purchases that we all make. He implores us to be more skeptical, to become the kind of skeptic who, while searching the web, keeps an open mind.

Please join me in welcoming our guest, one of the most respected voices among today's cyber philosophers, Evgeny Morozov. Thank you for coming back to visit us.

Remarks

EVGENY MOROZOV: Good morning. Thank you so much for this introduction, which is very detailed. I feel that you have given away most of my talking points. Now I have to improvise.

As you have heard, my first book tried to tackle the temptations that come to a lot of policymakers, particularly those in the foreign policy community: the temptations of enrolling Silicon Valley companies (Facebook, Google, various social networking platforms) in trying to promote democracy and freedom. What I tried to do in my first book was to understand some of the costs of relying on the likes of Facebook and Google in promoting democracy in places like China, places like Iran, places like Russia.

As someone who comes from Belarus and who spent some time working in the nonprofit sector using some of those tools to promote democracy, I became aware that there are all too many costs to using such tools. They have to do with surveillance. They have to do with propaganda: lots of bloggers are being paid by governments to go online and promote the governments' talking points or to commit cyber attacks. I tried to understand what it is that we're missing as we get excited about such tools and technologies.

What I tried to do in my second book, the one that just came out last month, was to also try to understand the limitations and the costs of relying on Silicon Valley and of relying on platforms like Google and Facebook, but to solve very different kinds of problems. So I shifted my attention from authoritarian states, from the likes of China and Iran and Russia, more towards liberal democracies and tried to understand what role Silicon Valley sees for itself in solving big problems, whether it's obesity or climate change, and what roles policymakers see for Silicon Valley in helping us to solve those problems.

The argument that I'm making is as follows: Two trends have happened in the last 10 years that I think will change the problem-solving environment, if you will.

The first of those trends: we have sensors that have become very cheap and very small and can now be embedded in almost anything. You can embed a sensor in any single artifact in your household or in the built environment. That's why we have all of those smart technologies around us. You may have heard that more and more technologies are called "smart."

We have smart toothbrushes that can monitor how often you brush your teeth and then report this data to your insurance company or to your dentist. Why? Because there is a sensor in there which is both capable of connecting to the Internet and which can understand what it is that you do with the toothbrush. So objects, in some sense, become aware of what they are for. They understand their own purpose.

By the same logic, we have smart forks. As you may have seen earlier this year, they were presented at the Consumer Electronics Show in Las Vegas. Smart forks monitor how quickly you are eating, and they will tell you that you're eating too fast if the sensor built into the fork detects that that's the case.

There are many other applications that are also called "smart" that more or less rely on sensors to understand what it is that they are for. You have smart umbrellas that, thanks to sensors built into them, can check the weather. They know that it's going to rain in the afternoon, so they will tell you, "You need to fetch me" before you leave the house. Some kind of blue or green light will turn on in the umbrella itself.

The list of such smart technologies goes on: smart shoes that know when they are about to get worn out, and all sorts of other smart devices and gadgets that are either in development or already out on the market.

The key change that I'm driving at is the proliferation of sensors. Building a sensor into a device can make the device aware of what it is for, allow it to provide feedback to the user, and thus change user behavior in one way or another. If you think about it, this allows for what some behavioral economists and policymakers call a nudge: it becomes possible to change user behavior by giving people information they didn't have before. In the case of the smart umbrella, the nudge is the green light that goes on to tell you it's about to rain, and the behavioral intervention is that you take the umbrella and thus don't get wet.
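To make the mechanics concrete, here is a minimal sketch, in Python, of that sensor-to-nudge feedback loop. The reading, threshold, and light signal are invented for illustration; no real device's API is implied.

```python
RAIN_PROBABILITY_THRESHOLD = 0.5  # nudge the user when rain is likely

def read_forecast_sensor() -> float:
    """Stand-in for the umbrella's weather lookup; returns P(rain today)."""
    return 0.72  # hypothetical reading

def nudge(rain_probability: float) -> str:
    """Turn a raw sensor reading into a behavioral nudge."""
    if rain_probability >= RAIN_PROBABILITY_THRESHOLD:
        return "green light ON: take the umbrella"
    return "light off: no nudge"

print(nudge(read_forecast_sensor()))
```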

The big policy idea here is that it becomes possible to steer consumers in one way or another. So this is one big change, the proliferation of sensors.

Another change, which mostly has to do with the fact that we are all now carrying smartphones and that we're all now present on social networks, is that all of our social circles, all of our friends, have, in one way or another, become portable. Basically, whenever we engage in interactions with each other or with built artifacts, with household devices, or when we engage with social institutions, it becomes possible to mediate our interaction through our social circles.

Just to give you a couple of examples, there is a new app that a lot of people in Silicon Valley are excited about called Seesaw. Whenever you are faced with a tough choice and you don't know which dress to buy or which latte drink to order, the app lets you immediately poll all of your Facebook friends through your smartphone, and they will tell you which one they prefer. Then you can choose the one they have recommended.

We can think about it from a philosophical perspective. It's an app that tries to minimize the pain of decision-making. Instead of buying a dress and then getting disappointed that all of your friends hate it, you can actually minimize some of the pain by preemptively asking them before.

But what interests me in this is that it becomes possible to basically add a secondary layer. It becomes possible to add the information, the feedback, from your friends, from your entire social circle, to your decision-making.

To give you an example that brings those two trends together (the proliferation of sensors, on the one hand, and the proliferation of social circles that have become portable, on the other), consider the so-called smart trash bin. It's a project called BinCam, built by designers in Germany and Britain. It looks like a regular trash bin that you have at home in your kitchen, but it has a smartphone built into its upper lid. Whenever you close the trash can, it takes a photo of what you have just thrown away. Why? Because the photo is then uploaded to an Internet site called Mechanical Turk. It's a site run by Amazon, where freelancers get paid to perform various tasks that computers cannot do yet.

In this particular case, a bunch of people are paid to analyze your recycling behavior, to judge whether you are recycling things correctly or not. After that analysis has been done, the picture is uploaded, together with the analysis, to your Facebook profile, where you enter a competition with your Facebook friends. Whoever earns the most points wins the recycling competition.
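Here is a minimal sketch, in Python, of the shape of that pipeline: photograph the trash, have paid human annotators label it, then post the score to a social feed. Every function body is a hypothetical stand-in; this is not a reproduction of BinCam's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class RecyclingReport:
    photo_id: str
    correctly_recycled: bool
    points: int

def capture_photo() -> str:
    # The camera in the bin's lid would return an image here.
    return "photo-0001"

def crowd_label(photo_id: str) -> bool:
    # Stand-in for sending the image to paid human annotators
    # (a Mechanical Turk-style task) and reading back their verdict.
    return True

def post_to_social_feed(report: RecyclingReport) -> None:
    print(f"{report.photo_id}: +{report.points} recycling points")

photo = capture_photo()
ok = crowd_label(photo)
post_to_social_feed(RecyclingReport(photo, ok, points=10 if ok else 0))
```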

That might seem like a bizarre and surreal example, but it's a real project built by serious designers concerned with climate change and with the sustainability of our recycling habits. You can see the logic built into this product. The fact that you can now fit your entire Facebook circle into a trash can makes it possible to exert new types of pressure on you as a user and as a citizen. Note that they are not using sticks here; they are using carrots. They think that if you have people compete with each other for points, you can get them to engage in new types of behavior, or you can provide motivations that weren't available before.

This, by the way, is a very big trend in Silicon Valley, known as "gamification." Gamification is the effort to turn pretty much everything into a game in which you collect points and compete against other people. The idea behind it is that, as a company or as a policymaker, you can get people to do things they would not otherwise do as easily, because everything suddenly becomes a game that they are playing against their Facebook friends.
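In code, the core of such a scheme is very small, which is part of why it spreads so easily. A minimal sketch in Python, with invented behaviors and point values:

```python
from collections import defaultdict

# Hypothetical point values for target behaviors.
POINTS = {"recycled": 10, "walked_mile": 5, "voted_checkin": 50}

scores: dict[str, int] = defaultdict(int)

def record(user: str, behavior: str) -> None:
    """Award points whenever a sensor or check-in reports a behavior."""
    scores[user] += POINTS[behavior]

record("alice", "recycled")
record("alice", "voted_checkin")
record("bob", "walked_mile")

# Rank friends against each other: the leaderboard is the "game."
for rank, (user, pts) in enumerate(
        sorted(scores.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(rank, user, pts)
```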

Again, this might sound very surreal and trivial, but a lot of people, both in Silicon Valley and in policymaking circles, are very excited about this, because it can motivate people to do things they wouldn't do otherwise. So it becomes a very tempting option for problem solving.

Just to give you an example, last year one of the biggest proponents and theorists of this gamification trend wrote an op-ed for the Huffington Post in which he argued that one way to solve the problem of low civic participation in America is to start rewarding people with points for showing up at the voting booth with their smartphones and checking in online, the way they would now check in at a bar or a restaurant with apps like Foursquare. The idea is that, because you carry your smartphone everywhere you go and you carry your friends with you everywhere you go, everything can be turned into some kind of competition, and you can get people to engage and do things that they wouldn't easily do previously.

It's a new type of logic. It's like applying the logic of collecting frequent-flyer miles to pretty much everything you do, except that you're earning points that can make you look nicer or stronger than your friends, or points that you can later redeem for a latte. Many of those virtual points are convertible into products.

Essentially, the logic here is: Engage in recycling behavior; earn a latte. That's the logic that some of these designers are appealing to. Of course, while it might increase efficiency, you have to understand that it's a substitution of language.

It used to be that we wanted people to engage in behaviors like recycling because we thought that was part of good global citizenship. We thought they should do it for moral and ethical reasons, because it was the right thing to do. Now, with the proliferation of this new problem-solving infrastructure, as I call it, it becomes possible to bypass that moral register and appeal to people as consumers and as gamers, or whatever other label you want to plug in here, and basically bypass moral and ethical considerations altogether.

This is where it gets very tricky, because essentially we are relying on a market-like incentive structure to get people to engage in behaviors that previously were regulated through morality and ethics. If you start rewarding people with points for showing up at the voting booth on Election Day, appealing to market incentives to get them to do things they used to do because they were good citizens, then it's not obvious that the next time they are faced with litter on the pavement, they will pick it up, unless they expect to be rewarded with frequent-flyer miles or bonus points or you-name-it.

So there is a very, I would say, unexplored ethical lacuna here, in part because no one in Silicon Valley is interested in thinking holistically. No one among the technocrats devising these schemes is interested in treating citizens as complex beings who inhabit multiple worlds at once. You want to solve the problem of participation in elections, but you are not solving the problem of picking up litter on the pavements. You are trying to maximize efficiency in one context while perhaps losing the ability to appeal to citizens as citizens in another context.

So this is the emergence of this new problem-solving infrastructure: the proliferation of sensors, on the one hand, and the ability to carry your friends everywhere, on the other, so that new kinds of peer pressure and competition become possible.

To give you an example of how a company like Google can become the new favored problem solver for many of our policymakers, just think about the implications of a gadget like Google Glass, which some of you may have heard about. It's a new gadget that Google will put on the market in the next few months; early versions are already available.

It's basically a pair of glasses that costs $1,500 and has a minicomputer with a camera built into it, so pretty much everything you see through the glasses can be analyzed. You can get reminders about where you need to be and where you need to go. It has the capacity to recognize objects. It will know that I'm now looking at a glass. It will know that I'm now looking at a wall; it knows the color of the wall; it can tell me what other walls like this exist elsewhere in the universe. You are basically building a minicomputer into your visual field, so that objects suddenly become recognizable and it becomes possible to tinker with what you see and what you do. Again, it's another way of steering behavior.

Some of the same logic applies to the smartphones and the rest of Google's services, which now, thanks to sensors, can recognize the activities that we engage in.

I'll get back to glasses, but to stay on the smartphones for a second, there is this new app from Google called Google Now. Google Now tracks your activity across all of Google's multiple services. It tracks your email inbox. It tracks your calendar. It tracks what you do in Google Books. It tracks probably what you will do with the Google self-driving car. It basically tracks your interaction with the entirety of Google services.

What it does next is try to predict what you might be doing in the next five, six, or seven hours, based on what it already knows about you from that data. Let's say you have a reservation for a flight in your inbox. Google knows that you will be jumping on a plane in five or six hours. What it will do now (this is not science fiction; it's already available, and you can install the app on your Android phone) is check you into your flight automatically, without you asking it to do anything. It will check the weather at your destination and tell you whether you need to take an umbrella with you. It will also check real-time traffic conditions on your way to the airport, so that if the roads are busy, it will warn you that you need to leave an hour early.
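A minimal sketch, in Python, of what such a cross-service pipeline looks like: read a flight reservation from the inbox, then derive proactive reminders from weather and traffic. Every function here is a hypothetical stand-in, not Google's actual (and private) implementation.

```python
from datetime import datetime, timedelta

def find_flight_in_inbox() -> dict:
    # Stand-in for parsing a reservation email.
    return {"flight": "XY123", "departs": datetime(2013, 4, 16, 18, 0),
            "destination": "San Francisco"}

def forecast(city: str) -> str:
    return "rain"  # stand-in weather lookup

def extra_traffic_delay() -> timedelta:
    return timedelta(hours=1)  # stand-in real-time traffic lookup

booking = find_flight_in_inbox()
leave_by = booking["departs"] - timedelta(hours=2) - extra_traffic_delay()

# The reminder "cards" the assistant would show, unprompted:
print(f"Checked you in for flight {booking['flight']}.")
print(f"Forecast in {booking['destination']}: {forecast(booking['destination'])}. Take an umbrella.")
print(f"Heavy traffic: leave by {leave_by:%H:%M}.")
```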

It basically promises to take the hassle out of your life, which, for most of us, sounds like a pretty good project. But because it has sensors built into it (like any smartphone now, it has an accelerometer, for example), it can also track all sorts of other behaviors. In this case, Google tracks how many miles you walk every month. At the end of the month, it tells you that you walked 20 miles this month, 20 percent more than last month. That's the index card, the reminder, that it generates automatically without you ever asking for it.

That's an intervention that Google thinks is benign and good, because it can get all of us to walk more. This is the idea. You can see why such an idea would appeal to many policymakers. It's another way to try to solve problems like obesity and lack of exercise.

You can think about this further. Let's think about Google glasses. If you wear them every day, which is how they are meant to be used, they will know pretty much everything about what you engage in. They will know what you had for breakfast. They will know what you had for lunch. They would be able, if policymakers or Google want, to do for your nutrition patterns what your smartphone does for your walking patterns.

Researchers in Japan have built similar glasses now, which do the following: When you come to a restaurant and order a portion of fries with a steak and a large Coke, the glasses will actually make that portion and the Coke look larger than they are so that you would feel full sooner, which, for them, is a way to fight obesity.

You can understand that nothing at all prevents Google from building similar apps for Google glasses. Since the glasses can recognize objects, they can also manipulate and tinker with them. If you show up at a restaurant and the glasses already know that you have been eating too much fat that day, they can remove or black out high-fat items from the menu. That's the kind of intervention and tinkering that was impossible five or ten years ago and has suddenly become possible now.
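The filtering itself is trivial once the recognition works, which is exactly the point. A minimal sketch in Python, with an invented menu, fat budget, and intake figure:

```python
# Hypothetical daily fat budget and what the glasses think you've eaten.
DAILY_FAT_BUDGET_G = 70
fat_eaten_today_g = 65  # would come from the glasses' meal recognition

menu = [("fries", 17), ("steak", 30), ("salad", 5), ("sparkling water", 0)]

# Black out any item that would push the wearer over budget.
visible = [(item, fat) for item, fat in menu
           if fat_eaten_today_g + fat <= DAILY_FAT_BUDGET_G]
print(visible)  # [('salad', 5), ('sparkling water', 0)]
```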

This is part of the broader argument I'm driving towards. We have suddenly acquired the means of perfection, one way or another. The environment has become very programmable. It becomes possible to manipulate the environment, because of all the sensors, devices, and smart interfaces, in new ways to completely eliminate the temptations or to start fighting problems like obesity or climate change in new ways.

For obvious reasons, people in Silicon Valley and some people in Washington are very excited about these projects. Silicon Valley is excited about them for two reasons. First of all, if Silicon Valley is seen as the next big problem solver that can help us solve climate change or help us solve obesity, we would probably regulate it less rigorously than we might otherwise. Who would want to stop Google from solving all of those big problems by over-regulating how they collect data?

A service like Google Now, that app that tracks how much you're walking, became possible only because Google managed to push through what they call a single privacy policy. Up until last year, for example, what you were doing on YouTube wasn't at all connected to what you were doing on Google Search. Google didn't connect those two services, even though it ran both of them. Every single Google service had its own privacy policy, and Google couldn't just take data from YouTube, mix it with data from Gmail and the calendar, and then start making predictions about what will happen in the next five hours.

After they pushed through this new single privacy policy, they brought all the services under one umbrella. Now they can look at what you do in your Gmail, mix it with what you do in your calendar and in search and, in the future, with what you do with Google glasses and the Google self-driving car, and come up with all sorts of predictions.
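What the single policy enables is easy to see in code: per-service logs that used to sit in separate silos can be merged into one time-ordered profile per user, and it is the merged timeline that makes prediction possible. A minimal sketch in Python, with invented events:

```python
from itertools import chain

# Before the policy change, these were three separate silos.
gmail_events = [("2013-04-16 09:00", "gmail", "flight confirmation received")]
calendar_events = [("2013-04-16 14:00", "calendar", "meeting downtown")]
search_events = [("2013-04-16 10:30", "search", "weather san francisco")]

# After: one merged, time-ordered profile for the same user.
profile = sorted(chain(gmail_events, calendar_events, search_events))
for timestamp, service, event in profile:
    print(timestamp, service, event)
```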

The argument that Google makes, as you probably understand by now, is that gadgets like Google Now can help us solve problems in new ways, so why should we worry about this single privacy policy? That's the argument being made. We need to figure out whether these new problem-solving abilities and capacities provided by Google are worth giving up our privacy for.

So this is one big issue, and I think we need to be very careful.

Another reason why Silicon Valley is so excited about taking on some of these big humanitarian, do-gooder projects is that they are competing with Wall Street for the same pool of talent: the same computer scientists who would otherwise be building models in hedge funds or building predictive models for advertising on Google and Facebook. Of course, if Silicon Valley positions itself as essentially being in the same business as Transparency International or Human Rights Watch, while at the same time claiming that Wall Street is evil, it's obvious that most college graduates would rather end up in Silicon Valley than on Wall Street.

So there is a very self-motivated argument that Silicon Valley makes about being the new cool place to work, because "we are not just selling advertising; we are also solving some of the world's greatest problems." I have quotes in the book from Eric Schmidt and Mark Zuckerberg and many of these other CEOs, where they explicitly proclaim that their mission is not just to wake up in the morning and make money; it is to actually go and save the world: "Let's go and solve some of the world's greatest problems."

Just to give you an example of some of the costs and consequences that come once we delegate problem-solving to Silicon Valley, imagine that policymakers become really excited about this Google Now app that can track how much we are walking and will thus tell us that we need to walk more at the end of each month. I'm pretty sure that policymakers would eventually be very happy with such an app because it's a new way of tackling problems that doesn't really require much of them.

In Britain, a conservative think tank that is very close to the current coalition government put together a policy paper proposing that health benefits could be cut for obese people who do not exercise at the gym. The way they would track it would be by checking whether people swipe their smartcards when they enter the gym.

The idea there is that you can make many transfer payments, for insurance or for many other things, conditional on certain behaviors, and those behaviors, because we now have smart technology and sensors everywhere, suddenly become possible to monitor and enforce. If you are carrying a smartphone that monitors how much you are walking, it doesn't require much to ask that smartphone whether you are walking enough to justify the payment you receive, or whether you should be paying more to your insurance company, for example. All sorts of interventions become possible once this data is out there.
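A minimal sketch, in Python, of that conditional-transfer logic: a payment that depends on behavior a sensor can attest to. The step target and benefit amounts are invented for illustration:

```python
WEEKLY_STEP_TARGET = 70_000   # hypothetical policy threshold
FULL_BENEFIT = 100.0
REDUCED_BENEFIT = 60.0

def weekly_benefit(daily_steps: list[int]) -> float:
    """Steps come from the phone's accelerometer; miss the target, lose 40%."""
    return FULL_BENEFIT if sum(daily_steps) >= WEEKLY_STEP_TARGET else REDUCED_BENEFIT

print(weekly_benefit([12_000] * 7))  # 84,000 steps -> 100.0
print(weekly_benefit([6_000] * 7))   # 42,000 steps -> 60.0
```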

But what really bothers me about delegating some of this problem solving to Silicon Valley, and to Google specifically, is that if we do that, we'll be tackling problems in a very particular way. It's one thing to tell citizens that they need to walk more, and now we have the self-tracking apps to monitor whether they are walking more or not.

It's a very different kind of intervention to actually go and investigate why they are not walking more. It might be that they are not walking because there is nowhere to go except for the mall and the highway, because the infrastructure is missing. So giving them the apps to track how much they are walking is not going to solve the big structural/infrastructural problems and challenges that give us problems like obesity.

It's one thing to be able to track how much fat or junk food we are consuming through Google glasses. It's another thing to actually provide access to farmers' markets or to regulate the food industry, including how the junk food industry advertises to kids. There are all sorts of big structural projects that we also need to embark on, instead of offloading all of this decision-making onto citizens and just telling them, "You need to optimize your behavior within the constraints of the current system."

This is what Silicon Valley would excel at. It can give us the apps so that we optimize our own behavior within the current system. But if you really think about reform, ambitious reform, the goal should be to reform the system itself. You cannot just be giving out apps so that we optimize how much we walk and how much we eat. We also need to ask broader questions about why we engage in certain behaviors and not in others, and what some of the structural constraints are that prevent us from changing our behaviors in the first place.

This is why I'm also so skeptical about the current excitement around ideas like big data. Some of you may have heard a lot about it. One of the main arguments promoted by big data enthusiasts is that since everything is now tracked and recorded, we have so much data out there that we can forget about causality and just focus on correlations. We would be able to know that a given input leads to a given output: that people who walk less tend to be fatter, say. Then the intervention is to have them walk more and give them the apps to track how much they are walking.

Again, there is something, to me, that is very worrying about the idea of replacing causality with correlation, because if you do want to engage in reform, you do need to understand the causal factors that you will be reforming. If you just focus on correlations, all you'll be doing is basically adjusting the behavior of the system without understanding the root causes that are driving it.

So the proliferation of big data and the ability to track the things we do is good only if we can actually understand why we engage in those behaviors. The ability to understand why is, I think, fundamental to understanding what it is that needs to be changed.
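A small simulation makes the point about correlation versus causation concrete. In the Python sketch below, a hidden structural factor (say, how walkable a neighborhood is) drives both walking and weight, so steps and weight correlate strongly even though, in this simulation, steps have no direct effect on weight at all. The numbers are invented:

```python
import random

random.seed(0)
rows = []
for _ in range(1000):
    walkable = random.random()                        # hidden structural factor
    steps = 4000 + 8000 * walkable + random.gauss(0, 500)
    weight = 95 - 20 * walkable + random.gauss(0, 2)  # depends on walkability only
    rows.append((steps, weight))

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

steps, weight = zip(*rows)
print(round(corr(steps, weight), 2))  # strongly negative, yet not causal
```

An intervention that simply pushes people to walk more would miss the factor, the built environment, that actually drives the outcome here.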

The bigger takeaway message is that we have to be very careful about delegating some of this problem-solving to private parties. We have to be very careful about delegating it to Silicon Valley, whose business models essentially revolve around sucking in more and more of the data we share for all sorts of reasons. The way things are going right now, those of us who refuse to self-track, who refuse to take on sensors, who say no, will increasingly find it very hard to interact with all sorts of social and commercial entities, because they will assume that we have something to hide when we refuse to self-track.

Think about the current generation of insurance companies that will give you a sensor to track how safely you are driving. If you drive safely enough and you can prove it with a sensor attached to your car, you can then get a discount.

The same logic now applies to all sorts of apps. You can install an app on your phone, and if you can document that you're walking enough and show it to your insurance company, you will end up paying less than the average person, because you will show that you are healthier than the person they use in their models when they price out their insurance plans.

A lot of people take this on. A lot of people are very excited about it, because the ability to monitor themselves allows them to get better discounts and better services. There are economic incentives for a lot of us to self-track, because if we can self-track, then we can get better treatment. So a lot of people are taking on these sensors and self-tracking apps. There is an entire movement now emerging in America called the Quantified Self. Those are people who are trying to quantify everything about their behavior, in part because they expect benefits.

But the big point I'm trying to make here is that those of us who refuse to self-track when the majority of people do self-track will be treated with suspicion. The assumption would not be that you refuse to self-track because you want to exercise autonomy or you worry about your privacy. The assumption would be that you refuse to self-track because you are not walking enough, or you are not a safe driver, or you eat too much fat.

Once such logic is in place, you can no longer believe Silicon Valley when it tells you that the decision to self-track or not is just an individual choice, that it doesn't depend on anything, that it's your autonomous decision. It's no longer an autonomous decision when social institutions work in such a way that not self-tracking is punished with higher insurance payments or something else.

Now I will stop, and we can open it up for questions. Thank you so much.

Questions

QUESTION: Ron Berenbeim.

There was one aspect that at least I didn't hear your views on, and that is the use of this sensing, so to speak, in terms of motivating, directing, and driving political behavior: voter patterns and party loyalties and so on.

Also, a kind of related question. You may have heard (I think I read something in the newspaper about it) that there has been a very active debate about gun registration and so on in this country. It occurred to me that if you had this kind of self-sensing behavior, with economic premiums for people who, in effect, self-reported, the idea of what the solutions were would change.

EVGENY MOROZOV: With regard to political behavior, if you are talking about mainstream politics and voting, I didn't touch upon that. But when you think about climate change and participation and that kind of problem solving, it's also political. Getting people to engage in recycling, to me, is a political act. But to talk about elections, I think here we will see some very dangerous trends, in part because it's not just sensors; it's the ability to customize your media environment and shape the message to you as an individual voter, based on what the advertiser, or in this case the political party, knows about you.

Now we all inhabit our own little media bubbles. We all have access to customized media: our Facebook news feed, our Twitter account; the information we get through Google News is highly customized. It will be possible to narrow, target, and shape the political message individually. So all of us here might hear from the same candidate, but we would all hear different things, and we wouldn't know that we heard different things, because the ad will be customized to us and shown only once. Nothing prevents that from happening.

Of course, in the age of broadcast media, when there was one ad that the entire country saw, it was a very different environment, one where you could actually go and scrutinize the ad. If the candidate crafts a message that is essentially conditional, in that it depends on your history of interactions with Facebook, it will be a very highly customized message, and you wouldn't even know why you engage in certain behaviors.

I often give an example. Let's say your supermarket has a cooperation deal with Facebook or Google. Many of them now have deals with Facebook, for example. Facebook now has a deal with a data collection company that tells Facebook what products you buy through loyalty programs at supermarkets. So essentially offline behavior is integrated with your online behavior into a single profile.

Imagine your local supermarket knows that three days ago you put in a Google search: "How to become a vegetarian." Maybe your local supermarket doesn't know it was you by name, but it knows that someone in your building or in your ZIP code or on your street has done that search.

Imagine that the next time someone with a smartphone registered in your ZIP code shows up in the supermarket, your smartphone immediately starts buzzing because you are getting coupons and discounts on meat products. Or you walk by a local restaurant and it sends a message to your smartphone (which is now possible) telling you that you are invited to a free steak dinner. After three months of such behavioral manipulation, you suddenly decide, "Why on earth should I be a vegetarian?" You decide not to become a vegetarian, but you have no idea that all of that happened because someone in the background was pulling those strings.

So you think you have reached an autonomous decision, while the decision was anything but autonomous, because the environment has been configured in such a way as to make that impossible.

I didn't answer the gun question, but I think it's worth exploring. I'm just not sure to what extent self-tracking would solve the gun problem: the people who don't own guns will report themselves, but it's not they who are the problem. In this sense, it might not solve it.

QUESTIONER: So there's a limit, possibly, to what this kind of thing can accomplish?

EVGENY MOROZOV: Oh, yes. Again, for different kinds of behavior, where you have different incentives, the efficacy will change.

QUESTION: Susan Rudin.

I find this very frightening. It's almost like Big Brother is going to be watching us. I'm not on Facebook. I have everything else, but I don't have Facebook. So can they not track me?

EVGENY MOROZOV: I think it's an illusion at this point that they cannot track you. A few months ago there was a revelation that Facebook can even track people who are not currently logged into Facebook but who are browsing sites like CNN or the New York Times or any other website. All of those sites have data-sharing agreements which, of course, we don't know about. You might visit a website to look up a word. It might be a dictionary website. But that website would be sharing data with 100 other intermediaries, which might end up taking that data and putting it somewhere.

So while your data may not reside on one of Facebook's servers, there are so-called data-collection agencies. They are not so much interested in you as a Facebook user. They are interested in you as someone who lives in a certain ZIP code and has a certain income, and they have ways to connect the data they collect from the rest of the Internet to your profile.

So not being on Facebook is not enough, because the rest of the Internet works on exactly the same logic as Facebook. Facebook just does it in a very concentrated and a very strategic and centralized manner, but other websites do it, too.

Then the question becomes, who are the intermediaries purchasing all this data? By themselves, the fact that you have searched for a product on one website and the fact that you have searched for a product on another website might not mean anything. But once you start putting those little factoids next to 100 other factoids from other websites, a very complex picture emerges.

These data intermediaries in the middle drive the entire primary and secondary markets in data now. You can actually go and purchase such information. Again, it might not be linked to you by name, but it will be linked to your ZIP code or your street address or you-name-it. And it becomes possible to de-anonymize it once you start adding it to other data points.
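Here is a minimal sketch, in Python, of that kind of record linkage: individually meaningless "factoids" from different sites, joined on quasi-identifiers like ZIP code and income band, add up to a person-shaped profile with no name attached. All the data is invented:

```python
from collections import defaultdict

# Fragments purchased from three unrelated websites.
site_a = [{"zip": "10021", "income_band": "high", "searched": "vegetarian diet"}]
site_b = [{"zip": "10021", "income_band": "high", "searched": "knee surgery cost"}]
site_c = [{"zip": "10021", "income_band": "high", "bought": "running shoes"}]

profiles: dict[tuple, list] = defaultdict(list)
for source in (site_a, site_b, site_c):
    for record in source:
        key = (record["zip"], record["income_band"])  # quasi-identifiers
        facts = {k: v for k, v in record.items()
                 if k not in ("zip", "income_band")}
        profiles[key].append(facts)

# One linked profile emerges; adding more data points narrows it to a person.
for key, facts in profiles.items():
    print(key, facts)
```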

So you shouldn't entertain the illusion that you're not part of some database because you're not on Facebook. That unfortunately is not the case.

QUESTION: Jennifer Tavis.

What do you feel is an appropriate role for government in managing the risks associated with these technologies? I know you have talked about how government is, I think, complicit in allowing certain things right now. But in an ideal world, they are the ones who have the power, I think, to actually set some limits around it.

EVGENY MOROZOV: There are two potential involvements for governments. One is regulating what happens to the data that is being collected. Six governments in the European Union are now suing Google for pushing through this single privacy policy last year. They have consistently gone after Google, because Google doesn't let you know what happens to your data once you share it. They don't want to disclose the life of the data once it has been shared with Google, which goes against some of the existing laws in the European Union. I think they are very reasonable in pursuing Google and pushing them to at least disclose what happens to the data.

I can assure you that once most of you know what happens to your data after it enters the Google universe, you will feel much more terrified than you are now, when you think it's just stored neatly and securely on some server and no one has access to it.

So pushing in that direction, I think, would be a good idea. I don't think it's ever going to happen in the United States, for various reasons, some of which have to do with general attitudes towards privacy and the idea that somehow economic growth is more important. There are all sorts of issues here. The existing data protection laws in the United States are just not strong enough to mount any challenge.

The second aspect that I would like to highlight here is, not just regulating those companies, but understanding how much, as I have said, of this problem-solving you want to delegate to Silicon Valley, to Google, and to Facebook. Do we actually want a bunch of technology companies tinkering with our trash bins in order to engage us in more environmentally friendly behavior? There are all sorts of other implications. That's something I haven't mentioned yet.

Think about that trash bin project I mentioned. Once photos of your trash travel through the Facebook universe to impress your friends or to win points, they are also stored somewhere. It becomes technically possible for the FBI or any law enforcement agency to go to Facebook and ask what was in your trash bin three weeks ago. That was not possible before. We have had all sorts of legal fights about whether the FBI can look into your trash, but there was no technical way to know what was in your trash bin a month ago or a year ago. Now, if that data is kept somewhere, it suddenly becomes possible.

Once you wear Google glasses, I can assure you that what you see through them will not just be thrown away or discarded. The data will be stored somewhere by Google. They will tell you that it will only be accessible to computers, and that computers will analyze it in order to show you ads. You will be able to get a pair of Google glasses for free if you agree to see five ads a day somewhere on the left of the screen, or something like that.

To be able to generate those ads, Google would need to understand what it is you are seeing. I can assure you that, based on what we know about Google so far, that data will be stored somewhere on a Google server for a very long time, which again makes it possible to come and ask what was in your visual field three weeks ago. The proper name for Google glasses is "surveillance cameras," not glasses.

The marketing of such devices makes promises that I think we need to scrutinize, and understanding the implications of storing all this data is another big challenge for policymakers. But maybe we shouldn't be allowing these companies into our trash to begin with. If we don't, then we won't need to think about all the data challenges.

QUESTION: My name is George Paik. Thank you for a wonderful presentation.

As you go through all these developments, I can already hear my tech friends saying, "Oh, we can just make the programming better and allow for nuances and so forth." At some point, it strikes me, you come down to a kernel issue of, as you say, what is the value of this autonomy? Which gets into the philosophical debate of whether, or how much, the human being is a complicated response mechanism or something different that can't be captured. I think I can detect where you stand on that question.

My question is, aside from yourself, what philosophical engagement is there with direct implications on that, couched in terms of this technology? Is there anybody good that you can argue with at this point? Are there supporters? Has this been closed? What is the nature of the discussion to this point? Or are there implications that we need to put into our philosophical community question?

EVGENY MOROZOV: I think the situation is pretty bad in the technology circles. There are debates that are happening outside about the intrusion of market logic and market incentives into domains that are regulated by morality and ethics. Michael Sandel has been writing about it for many years now.

What I tried to do in the book is to build bridges to those debates in philosophy and morality and ethics, because so few such debates are happening in technology circles. That's one of the reasons why I wanted to write the book: to the people who wrote that paper and built that smart trash bin, it hasn't even occurred that motivating people to recycle through games might be ethically problematic.

QUESTIONER: You are asking this debate to start?

EVGENY MOROZOV: That's the goal. Such debates are happening outside of technology, but for the designers and technologists who are building these artifacts, efficiency is the main criterion. As long as you can get more people to recycle, or more people to the voting booth, it doesn't matter whether you appeal to morality or to collecting points. Efficiency is what matters. That's the only criterion.

What I'm trying to do, and this is something I haven't talked much about in this presentation, is to understand the limits of perfection and to articulate the value of imperfection. Surely there should be values other than efficiency driving this debate, values other than a mad commitment to innovation.

I wish I could point you to unfolding philosophical debates about autonomy, but there aren't any. It's a sad state of affairs.

QUESTION: Richard Valcourt, International Journal of Intelligence.

Zadanski [phonetic] has written about the adaptation of Soviet society and the Soviet bloc countries to this monitoring of the population. We're experiencing that in the commercial sector now. But there is going to be a linkage between that and government, even benevolent government. We're going to be using it to run the Medicare system, all of this.

But there is a dividing line. Where do you see it happening? The subtitle of your first book was The Dark Side of Internet Freedom. This is the dark side. But here is the other aspect of it. It's going to move over into a less benevolent government at some point in time, because that's the nature of things. Where do you see that happening?

EVGENY MOROZOV: If you are asking when the likes of China and Iran will have the same smart trash bins, it's already happening now. The leading Chinese search engine, Baidu, has come up with the same kind of smart glasses that Google did, and they will be putting them on the market this year as well.

I would say that many of the companies you now have in Russia and China and Iran are pretty sophisticated when it comes to technology. The only reason they are less visible is because there is some kind of economic and political nationalism in the United States that prevents them from operating here.

The way we think about it is that when Google dispatches its cars to China to go and document every single building there, it's all part of a contribution to some humanitarian project of enlightenment. We all think it's great; it's organizing the world's knowledge. Imagine what would happen if one of the Chinese search engines decided to send a fleet of cars to America to document every single building in New York or somewhere in the Midwest. There would be an outcry, because we would think it was a spying operation.

For this reason, many of these companies cannot make it in America. For this reason also, a lot of governments outside of America are very suspicious of any Internet freedom that comes from Washington, because Washington itself is not really interested in bringing those companies to America.

In this sense, I don't really see much of a dividing line. What is obvious to me is that many of these devices and gadgets will be sold to citizens in those countries, again, as ways to tackle climate change, you-name-it. But since they do not have the same strong legal protections as we have here (and how strong those protections are is itself a big question), the likes of the KGB would be able to have a direct line to a trash bin on a 24/7 basis. You wouldn't have to go through the courts. So the implications of the system are much more severe in those countries, in part because there is no legal layer in between that might prevent over-exploitation of the smart environment.

JOANNE MYERS: You talk about all these apps being developed. Is there a perfect app that would counter all these other intrusions?

EVGENY MOROZOV: I think that's the temptation that we have as citizens, to tackle the intrusion of such technologies with more technology, while what we actually need are better laws and reforms and social debate.

I would oppose such an app, again because it gives you the illusion that these problems are solvable through technology, and they aren't. Now you can go and install all sorts of browser extensions to make it harder for the likes of Facebook to suck data out of your Internet searches or your social networking activities. Some of that is good, because we can all do something, at least, to stay off the radar. But that surely cannot be the only way to go, particularly as many of those apps now charge money. Why should I be paying $10 a month to defend my privacy when I pay taxes to my government to actually go and do something about such problems?

There is this bizarre privatization of problem-solving through apps, which I also find very suspicious. I think we need to be more skeptical about it and try to oppose it. There are good reasons why we have public institutions and governments that are supposed to tackle problems. You cannot, as a citizen, just go and do everything by yourself, even though the means of perfection are there.

This is the argument that all the people in Silicon Valley are making now. Why do you need newspapers and editors who can go and actually select what news is important if you can just go and read 5,000 blogs and come up with your own picture of what matters that day? Why do you need schools and universities when you have TED talks on YouTube and you just can go and watch TED talks for days on end?

There are all sorts of bizarre arguments now that try to offload all of this to citizens in a way that I find very ugly and suspicious and just anti-institutional. So in that sense, I would oppose such an app.

QUESTION: I'm Victoria Kupchinetsky, Russian Service, Voice of America. Evgeny, thank you so much for a wonderful presentation.

Can you actually compare the situation that we have right now with the situation that we had in the Soviet Union (I grew up in the Soviet Union) in terms of the control? Thirty years ago, we had a government that wanted to control the minds of its citizens. Do you think that the situation that we have now is more grave, in a way, deeper and more global than what we had in the Soviet Union 30 years ago?

EVGENY MOROZOV: At this point I don't think governments are very much interested in controlling your mind. People seem to be busy enough exploring the world of apps and cat videos on their own. In that sense, I find it very hard to compare the situation that we had in the Soviet Union and the situation we have now. The means of control, if you will, are very different. Not that you could compare the situation 30 years ago between the United States and the Soviet Union. It's not that they looked any more alike then than they look now.

I don't know. What merits such a comparison, in your view?

QUESTIONER: The things that you get from different sources are controlled by those sources, and you have less and less ability to make your own individual choices.

EVGENY MOROZOV: In that sense, I think what's being sold to us is the exact opposite. Silicon Valley will tell you that you're in perfect control, that you should just install 25 apps on your phone, make more decisions, and run even more Google searches about whether you should become a vegetarian. The idea being sold is that you have full autonomy, and I think Silicon Valley sells it really well. It's all about thinking differently, but it's also about thinking.

Silicon Valley polices those things really well. In the Soviet Union, certain bureaucrats would talk that way and tell you that you also had democracy and autonomy and whatnot, but no one really took it seriously. People in this country treat what Silicon Valley tells them very seriously. People actually believe in empowerment through apps on cell phones.

That, I think, is the main difference. Again, that's one of the reasons I wrote this book. Many people do take the promises from Silicon Valley for granted and they do take them seriously. They no longer think about the political implications of using apps for problem solving.

I mean, some people surely took Soviet officials seriously, but my recollection is that most of us didn't. That's the bottom line, I guess.

JOANNE MYERS: Evgeny, thank you for laying the foundation for the beginning of the debate on the ethical issues of the Internet. Thank you so much. That was just perfect.

EVGENY MOROZOV: Thank you so much for coming.
