The Driver in the Driverless Car with Vivek Wadhwa

Sep 6, 2017

What are the social and ethical implications of new technologies such as widespread automation and gene editing? These innovations are no longer in the realm of science fiction, says entrepreneur and technology writer Vivek Wadhwa. They are coming closer and closer. We need to educate people about them and then come together and have probing and honest discussions on what is good and what is bad.

JOANNE MYERS: Hello. I'm Joanne Myers, director of Public Affairs programs here at the Carnegie Council. This podcast is coming to you from New York City from the Carnegie Council headquarters.

Today I'm being joined by Vivek Wadhwa, co-author of The Driver in the Driverless Car: How Our Technology Choices Will Create the Future. This book was written with tech journalist Alex Salkever and has just been nominated by the Financial Times for Business Book of the Year. The ideas presented within will be the focus of our discussion today.

Our guest is an entrepreneur and technology writer living in Silicon Valley, but his résumé doesn't end there, as he is also a distinguished fellow at Carnegie Mellon University's College of Engineering and director of research at Duke University's Pratt School of Engineering. He is a globally syndicated columnist for The Washington Post and has held appointments at both Stanford and Harvard law schools, University of California-Berkeley, and Emory University. Currently, he is an adjunct at Singularity University. In 2012, Foreign Policy magazine named him one of the world's top 100 global thinkers.

Vivek, thank you for joining us today.

VIVEK WADHWA: It's great to be on with you, Joanne.

JOANNE MYERS: So let's begin. There is no question but that enormous new technological discoveries—for example, in big data, artificial intelligence (AI), and automation—are advancing at a rapid pace, making science fiction a reality. While many of these developments can make our lives better, others could present hazards to life as we now know it.

In your book, The Driver in the Driverless Car, you do something quite special, because not only do you excite us about the future, but in addressing both the social and ethical implications of new technologies, you admonish us to be vigilant. If we need to be mindful of all these new emerging technologies, are there specific questions we should be asking?

VIVEK WADHWA: Joanne, if you look at the state of politics in the United States and abroad, you see the extreme left and the extreme right rising at the same time. If you peel the onion a little bit, you begin to see that it is the same people, with just a few differences in outlook, but they're all angry. They're feeling left out, and they're feeling resentful that they're not participating in this revolution. There is a widening gap not only in income but also in culture, in values, in knowledge, and in being able to prosper from these advances.

So the first thing that I highlight is that we have to fix this. Right now we have the opportunity to build the amazing future we saw in Star Trek, that future in which we had all of our wants and needs met, in which it was all about the pursuit of knowledge. We were exploring new worlds. And it wasn't about money, it was about enlightenment. That is an amazing future we can be building. At the same time, if we don't get this widening gap under control, we could see the dystopia of Mad Max, where anger reaches a boiling point and we start destroying each other.

So the first thing we have to do is to make sure that everyone benefits from these technologies, and the fact is that with most of these technologies everyone can benefit. We need to make sure that there is equity not only in wealth but in technology, knowledge, and so on. If we do that right, we can withstand a lot of the negatives that will come from the technology, and we can build an amazing future.

JOANNE MYERS: Do you have any suggestions about how we can prosper through automation without leaving people behind, because so many people are fearful that they are going to be losing jobs, that they will be replaced by robots, etc.?

VIVEK WADHWA: Let's use the example you just brought up of losing jobs: The fact is that jobs are going to disappear. There is no stopping it. If you look at the way robots are advancing, robots will soon be flipping hamburgers, they will be taking our orders at McDonald's, and they will be sweeping the floors. Robots will also be doing a lot of the work that lawyers do because they will be able to analyze data better than lawyers can. They will also be able to do the jobs of doctors because they will be able to understand data and trends better than doctors can. So almost every job that requires physical labor or knowledge is going to be under threat from automation, which means that there will be less work for us to do.

But at the same time, if you have these robots doing all the work, the cost of everything becomes exponentially less. For example, the cost of energy is going to drop to the point that it is almost free because of advances that have been made in solar and wind. The cost of transportation will be practically zero because we will have self-driving electric cars on our roads that take us where we want to go.

So now the question becomes: If we have all the things we need, do we really need to work? Why do we work? Our grandparents often worked 60, 70, 80, 90 hours a week on the farms or in the factories. They lived extremely hard lives. We don't work that much. We complain about 40-hour work weeks. Why does it have to be a 40-hour work week? Why can't it be a 10-hour work week? Why can't it be a five-hour work week? Or why can't we have the focus be on other things, on helping others, on looking after the elderly, on practicing and sharing music and knowledge? Why does the system have to be the way it is today?

So, yes, we are talking about joblessness, but the question is: Can we develop a new society in which our value system is different and in which we are focused on giving back?

All of the things I'm talking about today seem like science fiction, but this isn't science fiction. This is all coming closer and closer.

JOANNE MYERS: People need money to buy food. There may be a greater abundance of food, and we may have greater access to water, but we still have to pay for these goods. If you don't have a job, how will you pay for the goods that you need to survive? I guess that is the question.

VIVEK WADHWA: In Silicon Valley they've got this romanticized notion of a universal basic income where everyone gets money for free. I think something like that is needed, but not the way Silicon Valley is hyping it, because that's socialism, and no one wants a socialist, communist society where government makes all the decisions and gives you what it wants to give you.

So we need to have a mechanism by which people can earn money and have their basic wants and needs met. When the cost of food, energy, and transportation drops to almost zero, it won't take much money to satisfy those wants and needs. But we do need to have a social structure, and in the book I discuss the fact that these things are happening and that we need to start thinking about them.

Joanne, I do not have all the answers. I can see the trend, I can see what's happening because it's so obvious when you look at the core technologies, but I don't have the solutions. And what I keep arguing is: Let's all come together and figure this out. We need you and your organization, along with many others, shifting gears and thinking about these bigger issues.

I admire the fact that you understand these issues and that you understand enough to be concerned and to have this discussion with me, which tells me that you really get it. Right now the question is: Can you lead the solutions here? Because this is more your job than my job.

JOANNE MYERS: Let's just leave that where it is for the moment. But going back to the title of your book, The Driver in the Driverless Car—because you brought up how the roads will be taken over by the driverless car, etc., I need to ask you—I know you're of Indian descent, and I just recently read that in India the union minister for road transport, Mr. Nitin Gadkari, said: "We will not allow driverless cars in India. We are not going to promote any technology or policy that will render people jobless." What are your thoughts about that?

VIVEK WADHWA: I think that that transport minister is going to be jobless before he knows it, because he is smoking something. He is out of touch. You can't stop technology.

If you just watch a video of any road in India, you'll see it's crazy.

JOANNE MYERS: Yes, I have been to India. I know, I know.

VIVEK WADHWA: India needs driverless cars more than we do because that's the only way there is going to be law and order on those roads. Hundreds of thousands of people die every year from road accidents. You can't get from one part of any large city to another in less than a couple of hours. It is just sort of pathetic how bad it is.

JOANNE MYERS: Right.

VIVEK WADHWA: India desperately needs this stuff. So let the technologies work, let people see them working in Pakistan and in Sri Lanka, and before you know it there will be people writing against these ministers and throwing them out of their jobs, and they will bring the technology in.

JOANNE MYERS: I absolutely agree. Also, I think they should institute engineer-less trains because that would prevent a lot of the train accidents as well.

VIVEK WADHWA: Exactly.

JOANNE MYERS: Going back to whether technology has the potential to benefit everyone equally, you also bring up a second point, a question we all should be asking: "What are the risks, and what are the rewards?" How do you suggest we go about weighing them? It's a balancing act. Take, for instance, the inevitable merging of man and machine: some people would say it's a good thing, while others would say that robots are dangerous, that they are a threat to our existence. Could you talk a little bit about that?

VIVEK WADHWA: Absolutely. As far as man and machine go, do you wear glasses, or do you have anyone in your family who wears eyeglasses?

JOANNE MYERS: Of course.

VIVEK WADHWA: If the answer is yes, you have already been enhanced. It is unnatural for us to have these things on our faces which give us better vision.

So now the question becomes broader. We also get hip replacements, we get little metal rods put into our legs and our knees and so on, so we already get some types of enhancements. What if you could now get enhancements that let you walk up mountains, actually climb mountains, that give you back your strength? What if we could get retina implants that give us perfect vision? We already have hearing aids. What if those hearing aids gave us 10 times more capability than they have today? Who would say no to those? These are things we are already doing. So the question is: Where is the line between man and machine?

On that front, we're going to be debating 15 or 20 years from now whether we want to be like Steve Austin in The Six Million Dollar Man. Do you remember The Six Million Dollar Man? We're going to have those capabilities before we know it. So the question becomes, how much enhancement do we get?

Now on the question of risk and reward, I'll tell you something I'm terrified about. I talk in the book about CRISPR (clustered regularly interspaced short palindromic repeats), which is really gene editing. About five or six years ago, scientists at Berkeley and MIT (Massachusetts Institute of Technology) figured out how they could edit DNA, in other words, how they could change life itself, as if opening a Word document and changing the letters that make up our genome. This stuff really seems like science fiction, but it's happening right now.

But soon we will figure out exactly what makes us intelligent. What are the genes? It may be 100 or 200 genes that do it, but we will figure out what they are. We will also figure out how to eliminate hereditary diseases. Let's say that in your family you have Huntington's disease or some other horrible, debilitating disease, and you are about to have a child, and the doctor does a blood test and says, "Your child is going to have this disease." Let me ask you a question: If the doctor said that you or your daughter was going to have a child with a horrible disease and you had the option of taking a pill that would edit it out, would you do it?

JOANNE MYERS: Yes, but of course.

VIVEK WADHWA: Okay. So now, the doctor says: "By the way, you also have these other 15 or 20 diseases in your family. You remember how your mother got migraines and you would get migraines? We can take those out." Would you do that?

JOANNE MYERS: I think you know the answer.

VIVEK WADHWA: All right. Now the question becomes: "You can actually add some IQ points to your child. Do you want 10 IQ points? Do you want 20 IQ points? Do you want 100 IQ points?"

JOANNE MYERS: I think we're going down a slippery slope. So the question is: What can you do to mitigate the risk and ensure that the benefits will outweigh the risk of having a society where everybody looks alike, everybody thinks alike, everybody has super-intelligence? The robots will be taking over and we won't be human anymore. How do you ensure that we still retain some of our human qualities?

VIVEK WADHWA: The first thing is to learn that this is happening, and this is what the book is about. What I tried to do in The Driver in the Driverless Car was to educate people on the basics of technologies, and then to share my perspectives with them, saying: "Look, all of this stuff is happening. Let's learn it and let's come together and discuss what is good and what is bad."

On gene editing, I actually took a stand about two years ago. I wrote an article for The Washington Post in which I said we need a moratorium on human gene editing because I was terrified that we could be modifying the human germline, which means changing humanity itself.

JOANNE MYERS: Right.

VIVEK WADHWA: That article got so much criticism from the singularity community, the people who believe in life extension and so on, saying: "How dare you? You are going to now condemn children to have these horrible diseases?" I didn't say stop it, I said: "Let's slow down. Let's learn about it. Let's make sure that what we're doing isn't going to destroy humanity itself."

But these are the debates that we need to have. We need to start learning this stuff, and this is why I wrote the book.

JOANNE MYERS: Right. Well, isn't it curious then that you and Stephen Hawking, who suffers from a terrible disease (ALS), have raised the alarm that perhaps we shouldn't be going down this road because—

VIVEK WADHWA: Right.

JOANNE MYERS: —the recent breakthroughs in AI have made many people ask the moral question of whether any of this, robots and the like, can ever be a moral agent.

VIVEK WADHWA: That's the problem. On the one hand, if you happen to be the wrong color of skin and you get pulled over for a speeding ticket, it is more likely that you are going to be searched, and then it's more likely that you're going to be incarcerated. It's more likely that you're going to end up on the wrong side of justice. If we had artificial intelligence-based judges and a more rational system, we would have less of that.

On the other hand, it is exactly what you said. This AI is not moral, it is simply looking at raw data and making judgments without having the human ability to really determine what's right and what's wrong.

The question then again is where you draw the line. Joanne, you are as smart as I am, if not smarter: You tell me the answer, please. I don't know the answer.

JOANNE MYERS: I think you're right to keep talking about it and raising the alarm, and probably to give examples of things that people find distasteful or that make them want to stop the progression of technology. It's a very difficult balancing act. You know more than I do. You're in it, you live it, you're a futurist, you see what's coming down the road. I guess the only thing that we can ask is that we all be aware of it and see how far we want to go.

But let me ask you this, just going back to the robots: The biggest question, I guess, is that we still have to institute some norms about what happens if a robot malfunctions and causes harm. Who is to blame in the end? All these people, these great scientists, are developing gene editing and playing with our DNA, but when something goes terribly wrong, what happens then? Who is to blame? Is it society as a whole for allowing this to happen, or should it be an individual? Will there be insurance companies that make money on this? What are your thoughts?

VIVEK WADHWA: You're asking all the right questions. We are going to face this; forget the robots, the killing machines, and so on. Self-driving cars will be fully autonomous in the next two, three, or four years. The question then is: Who is liable if such a car hits someone? Is it the driver? Is it the owner? Is it the software manufacturer? Is it the hardware manufacturer? We've got to figure this out.

There are discussions happening about this now in policy circles, but it's an important discussion to have. We have to decide product by product. If you ask me my opinion, it is that the product manufacturers have to bear liability. But the tech industry is getting a free ride right now. They're not taking responsibility for their creations. They have to be more responsible, and we have to put them on the spot, and we have to be having these discussions and these debates.

The answer is: Let's start talking about it. Let's come to a consensus on what's important here. Again, I'm turning the tables back onto you because this is the type of discussion you folks need to be having because it's all about humanity itself. We need to have your discussions move up a notch so we can now start developing a consensus on what's good and what's bad.

JOANNE MYERS: Is there any institutional norm in place besides having a panel discussion here and there? Is there something broader, on an international level, where people come together and talk about these issues so that the greater public can take advantage of your knowledge and your expertise?

VIVEK WADHWA: Joanne, you're the first think tank that has asked me to have this discussion, which shocks me.

JOANNE MYERS: It shocks me too.

VIVEK WADHWA: Yes, I know. Because this is one of the most important issues of our time, and yet the only interviews I've been doing have been with radio stations and with TV shows, not with the think tanks. Where are you people? Why is it only Carnegie that is asking these questions?

JOANNE MYERS: Well, we've always been ahead of the curve. This is evidence of that fact, right?

I thank you so much for being our window into the future and for raising such important questions. You've given us the necessary guidelines by pointing out what we should be thinking about: the benefits versus the risks, how strongly this technology promotes autonomy and independence, and whether technology has the potential to benefit everyone equally.

So, all those listening, I suggest that you go to Amazon, order the book The Driver in the Driverless Car: How Our Technology Choices Will Create the Future. And as Dr. Seuss once wrote long ago, "Oh, the places you'll go!" And it is a question of choices.

So, thank you, Vivek, so much for taking the time to talk to us this afternoon.

VIVEK WADHWA: Thank you for having me on, because you're raising all the right questions, and I thank you for that.
