The Industries of the Future

Mar 10, 2016

TV Show


Driverless cars, designer babies, cryptocurrencies, cyber warfare, pervasive "sousveillance" that erodes our privacy, often with our consent—what are the upsides and downsides of this brave new world? Alec Ross, who is neither a utopian nor a dystopian, expertly guides us through it.


JOANNE MYERS: Good afternoon, everyone. I'm Joanne Myers, director of Public Affairs Programs, and on behalf of the Carnegie Council I'd like to thank you all for joining us.

Our guest today is Alec Ross, one of America's leading experts on innovation. The focus of our discussion is based on his book entitled The Industries of the Future, which explores the technological developments and economic trends that will affect countries, societies, and even you in the coming years.

Alec is currently a distinguished visiting fellow at Johns Hopkins University. Previously he was a convener of technology policy for Barack Obama's 2008 presidential campaign, and then served as Secretary of State Hillary Clinton's first ever senior advisor for innovation, working to bring statecraft into the 21st century.

The words "innovative" and "innovation" are bandied about in the media, in business, and in our everyday lives. While technology moves us forward, it also pushes aside existing soon-to-be-outdated technology, which presents new challenges. Just as the automobile put an end to the horse-and-buggy system and the telegraph disrupted the Pony Express, Alec talks about how robots, genomics, the codification of money, cybersecurity, and big data will impact our world.

In the next 30 minutes or so, Alec and I will have a conversation about The Industries of the Future; then we will open the discussion so that you will be able to ask any questions that may be on your mind.

But before we begin, let me just say, Alec, how grateful we are for your being here today so that you can take us on a little journey into the future.

ALEC ROSS: Thank you for having me.


JOANNE MYERS: Alec, you've traveled the world. You've been to over 41 countries during your stint at the State Department and I'm sure many more, and you've seen many new technologies, some that are being used, some in the experimental stage. So could we just begin by asking you to talk a little bit about what you see as the industries of the future?


ALEC ROSS: First of all, thank you all for coming out this evening. I very much appreciate Carnegie playing the role of convener, especially on these issues of technology and science as they relate to ethics.

Overwhelmingly, the writing being done from the perspective of Silicon Valley or about the technology and the science doesn't really scrutinize the ethical issues and the humanistic issues that relate to the scientific and technological developments. So one of the things that I've really tried to do in writing The Industries of the Future is bring that content to the fore so that as technology and science advance, they are scrutinized using our human values.

As to what are the industries of the future, I'll just give a very, very brief gloss over what certain of them are.

Number one, artificial intelligence, machine learning, and robotics: I think that the robots of the cartoons and movies from the 1970s are going to be the reality of the 2020s, and I'm happy to go into more detail about that in a little while.

Another one is the commercialization of genomics. We are now about 15 years past the mapping of the human genome, and we are now right at the point where we are able to develop the kinds of personalized medicines and early diagnostics that have been long promised by the commercialization of this field. The world's last trillion-dollar industry was created out of computer code. The world's next trillion-dollar industry is going to be created out of the genetic code.

I also look at the rise of big data. Land was the raw material of the agricultural age. Iron was the raw material of the industrial age. Data is the raw material of the information age. He who owned the land and controlled the land during the agricultural age had the economic power and had the political power. He who owned the factories and controlled access to the natural resources during the industrial age had the economic power and the political power. He or she who owns the data, controls the data, and can draw meaning from the data during the information age is the one with the economic and the political power. So I examine big data in its full dimensions and try to look at how it interrelates to existing industries like agriculture or foreign language learning and other such things.

Cybersecurity: We live today in a world of 16 billion Internet-connected devices. So today, in March of 2016, there are 16 billion devices that connect to the Internet. That is the sum of our computers with Internet connections, our mobile phones, our iPads, and the sensors that we have in the supply chain.

Four years from now, which is not that long from now, that number will have grown from 16 billion to 40 billion, as we are creating Pacific Oceans-worth of data, Pacific Oceans-worth of information. But as we go from a world with 16 billion Internet connections to 40 billion Internet connections and move more of our lives, like our electronic medical records, to the Cloud, the security risks go with that.

I think that the weaponization of code is the most significant development in conflict since the weaponization of fissile material. The difference is that creating a nuclear weapon requires access to the scarcest of scarce scientific talent in transuranium elements, whereas the creation of malware has a much lower barrier to entry.

I then examine what the attributes are for states and societies to compete and succeed in the industries of the future, and then conclude the book with a focus on what I call the most important job you'll ever have, which reflects my bias of being a parent.

JOANNE MYERS: It seems to me that there are some very serious ethical and moral issues that we should be thinking about in these new industries of the future. For example, last week Google had its first accident.

ALEC ROSS: The Google car.

JOANNE MYERS: While it may not be the first time that a Google car was involved in a crash, it may be the first time it caused one when it collided with a bus.

So this raises questions about who would be responsible in this particular case. I know that Google is going to go before the California Department of Motor Vehicles (DMV) and they will have some type of adjudication. But still, where does responsibility lie?

ALEC ROSS: Well, I think responsibility here clearly lies with Google. Now, having said that, when I think about autonomous vehicles, this is an area where I think the far more ethical thing to do would actually be to say we need to get humans out of the driver's seat. Some 3 million people are killed annually in car accidents, and the accident that took place last week in the Google car was the first accident after 3 million miles of driving. So while there are significant improvements that will need to be made, I imagine, before we are all displaced from the driver's seat, I actually think that the best promise of autonomous vehicles or driverless vehicles, the biggest upside, is actually the potential safety.

So I'm sure that we will create legal frameworks to determine liability and other such things. Right now, while it's the Google car and while Google is experimenting—you know, sending it up and down Highway 101—if it crashes into a bus, I'm sure that the liability will be theirs.

But the bigger picture, I think that going from a world where 3 million people are killed because of human error to a significantly smaller percentage because of the efficiency and effectiveness in this case of machines, I think if we can save lives that is ultimately the more ethical direction for us to move in.

JOANNE MYERS: But what if a robot is hacked by some nefarious force? This, I guess, leads to cyber warfare and whatever future things down the line that could happen.

ALEC ROSS: I think, in general, that if somebody hacks a robot, then that is a criminal act. A robot is just another kind of machine, and in the same way that any machine can be used with malevolent intent, a robot can be used for good or for ill.

I think that the far likelier downside of robotics, from the standpoint of our society, is less a worry about some sort of re-creation of Terminator and much more the meat and potatoes of labor displacement. So I think that the real challenge that comes with robotics goes to the next wave of automation of labor.

I grew up in West Virginia. There's not an ounce of blue blood in this body. I helped put myself through college in part by working as a midnight janitor. The men who I worked with on the midnight shift at the Charleston Civic Center were folks who would have decades ago had jobs working in the mines or working in factories that had moved to India or Mexico.

When I think about robotics and when I think about the downsides of robotics, you know, I really don't worry about killer robots killing humans. What I worry about are killer robots killing jobs.


ALEC ROSS: So the last wave of automation that took place largely replaced the work of men with strong shoulders. The work was manual and routine; and it displaced the labor taking place in ports, factories, mills, and mines. With the combination of artificial intelligence with robotics, what this means is that the kind of labor that can be displaced is not merely manual and routine but it can also be cognitive and non-routine. And so what this means is that the nature of the labor displacement is going to grow into new and different forms of labor, and that honestly worries me a lot more than people hacking robots.

JOANNE MYERS: But new jobs will be created that we don't really know about right now.


JOANNE MYERS: Should the federal government in some way be involved in creating a safety net for those who will be losing their jobs?

ALEC ROSS: That's a really important question. I think that, first of all, there ought to already be a safety net. I think that most people would say that there is a safety net.

The question, looked at over a much longer horizon, is: If we believe that the labor economics will create a net permanent loss of jobs, what interventions do we make in the safety net?

If we take a cold-blooded view of things right now, we have 4.9 percent unemployment in the United States. Now, add the permanently displaced, those who are no longer looking for work—let's say that doubles the number. That is still really good, and it's shockingly good relative to where we were in 2008 and 2009.

But I do think it's reasonable to sort of model out: If we are going to automate more kinds of labor, and if that 4.9 percent becomes 8.9 percent or 11.9 percent, then I do think we need to examine the safety net.

There's a lot of chatter in Silicon Valley right now about a universal basic income. I think that that is unlikely at this point for a variety of different reasons, not the least of which is that we can't afford a new entitlement program while we're still running up the kinds of deficits that we have.

But one thing that we can do in advance of any work examining our safety net is to look at the outputs of our education system and make sure they map to the areas where we know job growth will be. You made the point that there will be job growth. So in advance of thinking about things like a universal basic income or other safety net programs, I would examine the education programs that serve many of the most vulnerable Americans—vocational education, community college, these sorts of things—and make sure that their outputs really map to where the job growth will be.

JOANNE MYERS: Moving on, we talked earlier about privacy being so important and it brings up so many ethical issues in terms of how far does government interfere with our personal and private lives, and of course this leads to big data and what the servers and what Google and others—Facebook—are doing with the data accumulated by our being on the Internet. Could you speak to that for a little bit?

ALEC ROSS: Sure. I think that the impact of big data on privacy is extremely consequential. But let's break this into its constituent parts.

First, let's make a distinction between surveillance and sousveillance. Surveillance: people watching you from on high—the National Security Agency (NSA), say. Sousveillance: those of us with video-enabled mobile phones watching each other. I am significantly more worried about sousveillance than I am about surveillance.

Having spent countless hours in the White House Situation Room and having been somebody who is on the receiving end of the intelligence reports that say what various people are doing, I can honestly say that the reason government surveils is to identify terrorists and other people who are trying to create harm. You know, they are not trying to find out who is sleeping with whom or who is cheating on their homework or anything like that. They are legally precluded from doing so. So I'm much less worried about surveillance from our own country than I am about sousveillance, than I am about all of the little digital fingerprints that we leave everywhere.

When I was growing up in West Virginia and I went out the backdoor into the woods to play with my friends in the morning, I was not sending or receiving any data. I lived a relatively data-free life. I didn't send or receive a single email until after I was out of college. I didn't own a mobile phone until I was 28 years old. By contrast, my children, from nearly the moment they awaken in the morning, are creating these little digital fingerprints.

So what I worry about, in a world growing more transparent because we are all emitting so much data and because so much data is being captured from what will be 40 billion Internet-connected devices, is the norms that we have around privacy today evaporating. I question the ability of this to be effectively regulated.

What I think is more likely to happen is that as our lives grow more transparent because of more information being captured, I have a feeling that societal norms will shift.

Let's think about drug use and presidential candidates. You know, when Bill Clinton ran for president in 1992, it was a very consequential question whether or not he inhaled when he smoked marijuana. Fast-forward 16 years, and Barack Obama was like, "Oh, I inhaled. I inhaled a lot and I liked it. And oh, by the way, I did coke too." It was a non-issue in the campaign. So what shifted from 1992 to 2008? Norms shifted.

Think about homosexuality. When I was in college—which again, is not that long ago, 20-some years ago—homosexuality was still considered to be aberrant or scandalous behavior: "Hey, there's the gay guy." Today on any university campus it's overwhelmingly understood and accepted that some significant percentage of the student body is homosexual, and it is a non-issue. What has shifted from the time that I was in college to today? Norms shifted.

So what I imagine is that in a world with constantly eroding privacy, we will all have a scandal. Human fallibility will be in near constant view, and norms will shift. Should we just accept this? No, but I do think it's near inevitable.

JOANNE MYERS: But how do you draw the line then between public and private personas?

ALEC ROSS: Well, I think the problem is when you do draw a line. This is the problem. When somebody lives a life publicly that does not map to the reality of their private life, that hypocrisy is ever likelier to become public.

John F. Kennedy could not today have the sexual life that he had in the 1950s and early 1960s, right? Why? Because of sousveillance. Not because of surveillance—because of sousveillance.

And so I think that when you make a distinction, when you represent one set of values publicly and you live another set of values privately, I think that you're eventually going to get caught. I think that these are the kinds of things that for decades and decades, if not centuries, were kept out of public view. I think we increasingly see what previously was thought to be private behavior coming into public view, whether that is good or whether that is bad.

JOANNE MYERS: So let me ask you, how do you feel about the Apple/San Bernardino case? Do you think that Apple is on the right side?

ALEC ROSS: So 90 percent of the time, maybe 80 percent of the time, I will side with law enforcement on these questions. In this case, I think the FBI (Federal Bureau of Investigation) really made a mistake.

The reason why I think Apple is in the right and the FBI is in the wrong is because what the FBI is calling for will not make us more safe; it will make us less safe. What the FBI is telling Apple to do is rebuild its mobile operating system, iOS, and to do so with deliberately lower security thresholds so that there is effectively a backdoor that the FBI can walk through.

Last night I did an event with David Petraeus, the former Central Intelligence Agency (CIA) director; and he and I, as well as the current secretary of defense, the head of the NSA, the current head of the CIA, all agree that if you build a backdoor, it's not just the FBI who's going to walk through the backdoor. It is going to be the Chinese government. It is going to be the Russian government. It's going to be a variety of non-state-based actors.

I think the FBI is exceedingly naïve. And I think that in his testimony before Congress, the FBI Director Jim Comey all but admitted his ignorance of what the implications would be on the international stage.

The other problem—and here's the ethical problem: If a mandatory backdoor is built in for access by the FBI, the United States is one of 196 sovereign nation-states. So what happens in the other 195? Does this create a reciprocal set of obligations for Apple everywhere else it does business? I would think it would. But the problem is that not all of these other countries are protected by rights-respecting rule of law.

Let's take the tradecraft out of it. Let's take the People's Liberation Army out of this. What if the Chinese just say to Apple: "Hey guys, we want the exact same deal that you gave the American government. We're a pretty big market, 1.3 billion potential consumers here." They'll probably have to build the backdoor. But, instead of just doing terrorism investigations, they will do political dissent investigations.

So I think the FBI really screwed up.

JOANNE MYERS: Another area you talk about in your book, which is fascinating and brings up many ethical issues, is genomics, designer babies. But also, with all this mapping of DNA and the genome, we could create biological weapons, which again would be used for nefarious purposes. So where do you come down in terms of these new developments?

ALEC ROSS: My book is neither utopian nor dystopian. Most people who write books about the future are either utopian—"Oh, we're gonna live to be 150 years old, happy, healthy, wealthy, wise, lacking for nothing, lives of abundance"—or they're dystopian, you know, written from the fetal position with fists clenched. My book is a little bit more up the middle, and I think that life is neither utopian nor dystopian. The book is net optimistic, but I think it heavily weighs the consequences of these advances. I think the commercialization of genomics is an example of this.

Let's talk about the very positive and the very negative. On the very positive, my children, who are 9, 11, and 13 years old, I believe will have life expectancies three to five years longer than is currently projected because of a combination of early diagnostics made possible by genetic sequencing and precision medicines.

I'll just give one piece of color on this. I play racquetball. I was most instructed on this by a guy who I played racquetball with, who for years I thought was just a gym rat—has this sort of big gray beard, crazy gray hair; he wears a knee brace on the outside of his 1970s-style gray sweatpants; brings his racquetball gear to the court in a dingy old Samsonite suitcase. It turns out this guy is the world's most cited living scientist, Bert Vogelstein. It was his team at Johns Hopkins that in the 1980s discovered how genetic mutations cause cancer. Kind of a big deal, right?

What Dr. Vogelstein's team at Johns Hopkins, as well as a variety of other research institutions, has created is a thing called a liquid biopsy. What this means is that for those of you who, like me, get an annual checkup and get blood drawn to tell what your cholesterol level is, what they can do is sequence that genetic material and detect cancerous cells at 1/100th the size of what can be detected by an MRI. What this means is that cancers that we routinely today find in stages III and IV, we will find early in stage I when they are significantly more curable. That's the good story.

JOANNE MYERS: But you have to be able to afford to go to get that care.

ALEC ROSS: You absolutely do. And you know the progression for making that more affordable will take time.

On the downside though, as I was learning about this and getting very excited about the idea that my children will have years of added life expectancy, I said, "Well what's the downside of this?" It was explained to me—and I tell this story in the book—they said, "Designer babies."

What they said is right now when most expectant parents go to the doctor, they will be told, "Congratulations, it's a boy!" or "Congratulations, it's a girl!"

When my wife was pregnant with each of our three children, she had a genetic test, which is now routine, which also measures the probability of the child in utero having Down syndrome; and with that information you can make choices, for example about whether the child is brought to term.

What's interesting is what was explained to me by Dr. Luis Diaz—and I again share this in the book—is he goes, "Well now, when a mother is still in the first trimester, let's say 10 weeks along, we can tell her, 'Congratulations, it's a boy. He'll probably be between five-foot six and five-foot eight. He'll have curly brown hair. There's a 13 percent chance he'll have Parkinson's. There's an 11 percent chance he'll become an alcoholic. There's a 9 percent chance . . .' and then you keep going down the line." So now we're introducing a really interesting ethical question: What do you do with that information?

First of all, I think this will significantly impact parents' decisions about what child they bring to term. And when you add to this the development of technologies like clustered regularly interspaced short palindromic repeats (CRISPR)—let me ask how many of you have heard of CRISPR technology? [Show of hands] About a third of you. For those of you who aren't familiar with this, this basically allows us to do gene editing.

So imagine two scenarios:

Scenario one: You see that a child in utero has a mutation that leads to Huntington's disease. You would probably have absolutely no problem with that DNA being repaired so that your child does not eventually get Huntington's.

Scenario two: What if, using the same technology, you say, "Huh, my son's going to be five-foot six. I sure would like for him to be maybe six-foot two." "Well, you say brown hair. I was hoping for blonde hair." I mean, what about these kinds of choices?

So it's really, really fascinating when you think about the advances in the science. Then, I think, what's necessary is to begin to move past the obvious, sort of, "Kumbaya, oh yeah, we'll fix the genetic defect that causes Huntington's disease." But what about a parent who wants to add six inches of height to their son? These for me are the really fascinating ethical issues that need to be fully explored.

What's interesting, too, is I think we have to have some humility as Americans, where what we decide is not necessarily what everybody is going to decide. Let's say we make it a law that you can only do genetic engineering where the health and physical wellbeing of an expectant child is clearly at risk and where the intervention is essentially lifesaving, or something like this. Just because we do it doesn't mean everybody else will. What if Qatar or one of the Emirates or Singapore or—pick your country—creates an environment of total laissez-faire, so it becomes the center of genomics, as Switzerland was to banking, where sort of anything goes?

So what we decide in the United States, too, is not necessarily that which is going to hold. We aren't necessarily creating a normative structure that will be global.


QUESTION: James Starkman.

During the Cold War, we had a policy of mutually assured destruction, and it was effective. Nobody has dropped an A-bomb on anybody else for quite a long time.

In the area of cybersecurity, how would you advise Hillary, or anyone else in the White House? What should the policy be? What should the degree of use of cyber warfare be before the United States should retaliate in kind, or perhaps better than in kind?

ALEC ROSS: I don't know you, but I know you're very smart, because you went right to the heart of "The Weaponization of Code" chapter of my book.

What's interesting is, when you describe the doctrine of mutually assured destruction, what you're describing in essence is the framework that existed beginning in about the late 1950s to 1960s, and we then created a series of treaty structures to govern proliferation.

What's interesting about the cyber domain is that it is not as binary. It's not just USA versus USSR, where there can be a bilateral set of negotiations that establishes norms and frameworks. We're in sort of a norm-free zone. We're in a law-free zone. We are in a treaty-free zone. I think a mistake made by the U.S. government for the longest time was it assumed that it would benefit from the kind of asymmetry that the United States had in nuclear weapons in, say, the late 1940s.

We thought that we would be the sole cyber power and that we were inherently hostile to any kind of boundary-setting because we were so much stronger than everybody else. But what we learned was that the barriers to entry in cyber conflict are very low. And in fact, even though we might be the strongest in this domain, we in many respects have the most to lose.

So to your question, step one, I think, is to not rest on our laurels and accept our asymmetry, but recognize that it is actually worth doing deals multilaterally, to bring more states, including ourselves, into an understanding that the cyber domain cannot be a Wild West. There cannot be kinetic activity that goes unchecked. So right now, as we sit here, there is kinetic activity between the United States and China, and I think that it is quite consequential.

As to when I would ever advise to use a cyber weapon, I would say where doing so would save lives in a significant number and where the malware can be contained.

I'll give you an example. One time that was examined was in advance of the attack in Tripoli at the beginning of the NATO actions to remove Qaddafi. The question essentially was: If we bomb the air defenses, it'll create a body count of a certain number. Can we cyber-attack it, where basically we turn off the air defenses without anything blowing up and without anybody dying?

Now, a determination was made that it couldn't be done within the timeframe projected. But if I could push a button and keep buildings from blowing up and people from being killed, I would do it.

But here's the other aspect of this that I think needs to be understood: A bullet, after being shot, cannot be re-shot; a grenade, after you pull the pin, throw it, and it goes off, cannot be reconstituted and used again. Malware is different. A cyber weapon is different.

So it is alleged—I have to say it is alleged—that the cyber weapons used against the Iranian nuclear facilities left their code base behind, and there is evidence that a subsequent cyber attack emanating from Iran, directed against Saudi Aramco, the large Saudi oil company, had a digital signature that looked a lot like the malware alleged to have been used against Iran.

So theoretically what this means is Iran, having been a victim of a cyber attack which allegedly crippled its capabilities in the nuclear domain, having now access to that code base, used it against the Saudis. Again, it's not like a bullet or like a grenade.

So for me, what this does is make it all the more necessary to create structures that contain the use of these weapons. Maybe only the world's most sophisticated nation-states can develop the weapons, but if they can be repurposed by states or non-state entities with significantly lower capabilities, then I think the consequences are quite significant.

JOANNE MYERS: Do you think the danger to our society or our country is greater from a cyber attack or from a terrorist attack?

ALEC ROSS: That's a very good question. James Clapper, the director of national intelligence, was asked that question and he said cyber. I don't know.

I think that we constantly are victims of intellectual property theft coming from cyber attacks. The difference is that this doesn't create a body count. There's a difference between intellectual property being drained and a subway going black, or something like this. But I would argue that hundreds of billions of dollars' worth of intellectual property being extracted is consequential.

I am now three years out of government, so I don't read the intel reports that I once did, so I won't presume which presents the more imminent threat. But I will simply say that both are substantial.

QUESTION: Hi. Susan Gitelson.

You've mentioned the importance of robotics for the industries of the future and how there is increasing danger that robots will replace human beings in jobs, and it's all very nice on a theoretical level. But we're in the midst of a presidential primary situation which is surprising many people. And why? Because Trump has so much support from the very people we're talking about—people who have a high school education; who have had jobs and now can't get jobs, or if they can, they can't get well-paying jobs. This has enormous impact on our system, and the Republican leaders don't seem to know what to do about it. No one seems really to understand how to handle it. What would you suggest?

ALEC ROSS: Let me make sure I understand. How would I handle the economic issues related to displacement or how would I handle defeating Donald Trump? [Laughter] Which is the question?

QUESTIONER: Neither. The political implications—because this is actually happening in Europe and many countries—

ALEC ROSS: Yes it is.

QUESTIONER: —where people are so apprehensive and so angry that they are supporting extremist leaders.

ALEC ROSS: These are my people from West Virginia.

QUESTIONER: Exactly. So you know what it's all about.

ALEC ROSS: Donald Trump is going to win West Virginia, period. These are my people, so I understand perfectly. So what would I do about the political?


ALEC ROSS: Here's what I think. It's not the strongest who survive or the most intelligent, but those most adaptable to change. The anxiety that I believe is producing the support for Donald Trump is a product of anger coming from a fairly legitimate place.

I think that the only way to redress it is—this is not a sexy answer, but it's the only way I know how to do it—is to do the kind of structural reform in our education system that produces young people newly entering the workforce who are more resilient in the face of change. There is no shortcut. And the problem is that when these folks get older—you know, I'm 44 years old—we are harder to update than software. So look, taking somebody my age and older and saying, "Your jobs are no longer viable in our increasingly connected economy"—well, that gives me cold comfort.

You know, I'm fine. But the 44-year-old who I went to high school with who is still in West Virginia is not going to be fine with it, and if he's told, "Blame a brown person, blame a Muslim, blame this one or that one"—see, what Trump has done is he's done a brilliant job of saying who you can blame.

Let me be bipartisan about this. So has Bernie Sanders. Bernie Sanders has said, "Blame the 1 percent, blame the bankers."

I think the far right and the far left have far more in common with each other than they do with the middle. They are both responding to legitimate anxiety, legitimate anger, but they are drawing different conclusions about which way they should go: the far right or the far left.

This is playing out throughout Europe, where you get parties like Syriza on the far left in Greece; you get Marine Le Pen and UKIP (UK Independence Party) on the far right.

The only way I think to deal with the rise of extremism, be it far right or far left, is to deal with the legitimate foundation of economic anxiety; and the only way to deal with the legitimate foundation of economic anxiety is to make sure that those people entering the workforce have skills that actually matter.

QUESTIONER: That's for the future. What about now?

ALEC ROSS: I mean there is no magic wand that can be waved over West Virginia. There's none. If there were, it would have been waved by now. There is absolutely nothing, no wand that can be waved over my native West Virginia, that is going to make people understand and accept their economic lot in life.

QUESTION: Don Simmons.

For the past 30 or 40 years, our country has seen its economic productivity, real value created per man-hour of work, growing at a significantly lesser rate than it did in the preceding 30-40 years. First question, do you think these measurements are accurate?

ALEC ROSS: You're talking about the Robert Gordon research?

QUESTIONER: Yes, among other things.

And then, the second question is: What about the next 30–40 years?

ALEC ROSS: I question the econometrics of a lot of what Gordon has recently published questioning productivity. I think that what is insufficiently accounted for is much of what IT (information technology) enables. I can't compete with him as an economist. But what I do think is that a lot of the assumptions that are baked into the econometrics—I think too much of the executive summary was written before the research. Let me put it that way.

In terms of productivity over the next 30 or 40 years, I think this is a question of man versus machine. So are we measuring human productivity or are we measuring corporate productivity? I think that corporate productivity will probably increase at a pace that is beyond what we would say is the human productivity. This is part of what is driving industrial robotics.

Given a choice between hiring a human and buying a machine, I think most of us would say, "I'll hire the human." But the reason why Foxconn, for example, which employs 973,000 people, is moving from having 500,000 robots to a million robots is entirely a question of corporate productivity. So Terry Gou, the CEO of Foxconn, basically assesses that there are significant limits to what can be drawn from a human. There are limits on increases in human productivity. There are fewer limits on the productivity that can be drawn from robotics, which can increase corporate productivity.

QUESTION: Hi. My name is Chris Janiec. I participate in the Carnegie New Leaders Program here.

Several times tonight—and I know I've read it elsewhere—you said part of your motivation in writing this book is to bridge the gap between elites who are aware of some of these forces and some of the broader public that's not.

I'd like to hear you speak a little bit about another subject that I know you focus on in the book, which is Bitcoin. This is straying a bit far from my wheelhouse, but I'm curious whether, even in the time since you've written the book, the attitude of institutional investors towards Bitcoin has changed in a way that maybe narrows that gap and the potential for regular people to see that again. My very basic understanding is that there has been a move to try and replicate some of the parts of Bitcoin that people find attractive while keeping it exclusive in a way that maybe takes away from, what I will assume will be part of your answer, what the promise is.

ALEC ROSS: Yes. Thank you for asking that very good question. First, let me make a distinction between Bitcoin and blockchain.

Bitcoin, for those of you who don't know, was a cryptocurrency that was released in the fall of 2008, the purpose of which essentially was to be a competitor currency to fiat currencies—and the word "fiat" in Latin means "it shall be." It was released by either a person or network of people known as Satoshi Nakamoto, who have never revealed their identities. The idea essentially was that fiat currency basically relies on the full faith and credit of the nation-state. In the fall of 2008, the full faith and credit of the nation-state as the guarantor of our economic wellbeing was sufficiently diminished. So Bitcoin was created.

Now, I think that Bitcoin has a number of structural flaws. Truthfully, it functioned less as a currency, less as a store of value and medium of exchange, than it did as a speculative asset. So it was much more like a stock you would bet on, with values going up and way down, than as a legitimate store of value or as a medium of exchange. I still to this day couldn't buy a cup of coffee with any practicality using Bitcoin.

But—big but here—there was a genius technological innovation within the creation of Bitcoin called a blockchain. What the blockchain basically is, it is a distributed cryptographic ledger system. Basically, it is a way of doing highly trusted transactions using crypto, so it's theoretically very secure.
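For readers unfamiliar with the term, the "distributed cryptographic ledger" Ross describes can be sketched in a few lines. This is a toy illustration, not Bitcoin's actual protocol—it omits proof-of-work, digital signatures, and the peer-to-peer network—but it shows the core idea: each block commits to the hash of the previous block, so altering any past transaction breaks the chain.

```python
import hashlib
import json

def block_hash(block):
    """SHA-256 over a canonical JSON encoding of the block."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    """Toy append-only ledger: each block stores its predecessor's hash."""
    def __init__(self):
        # A fixed "genesis" block anchors the chain.
        self.chain = [{"index": 0, "tx": None, "prev_hash": "0" * 64}]

    def add_transaction(self, tx):
        prev = self.chain[-1]
        self.chain.append({"index": prev["index"] + 1, "tx": tx,
                           "prev_hash": block_hash(prev)})

    def is_valid(self):
        # Every block must still commit to its predecessor's current hash.
        return all(self.chain[i]["prev_hash"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.add_transaction({"from": "alice", "to": "bob", "amount": 10})
ledger.add_transaction({"from": "bob", "to": "carol", "amount": 4})
print(ledger.is_valid())                # True
ledger.chain[1]["tx"]["amount"] = 1000  # tamper with history...
print(ledger.is_valid())                # False: block 2's commitment breaks
```

In Bitcoin proper, thousands of independent nodes each hold a copy of the chain and must agree on it, which is what makes the record trustworthy without a central bank standing behind it.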

What is interesting is that now, even since I hit "send" on the manuscript—what I see is not so much mainstream, mainstream defined as the middle American walking down Main Street in Kansas. I don't know that I've seen an increased recognition among mainstream consumer America of the application of blockchain technology.

What I have seen is mainstream financial institutions understanding the potential of blockchain technology within their operations. The example that I give is Lloyd Blankfein, the CEO of Goldman Sachs. I've had three conversations about blockchain with Lloyd Blankfein. Regardless of what any of you think about Goldman Sachs, these guys know how to make money.

First time I asked him about it, he goes, "Oh Bitcoin. Isn't that where people like buy drugs and stuff like that?" [Laughter]

Second time he said, "Oh, you know, some people insist it's worth our looking at, and I'll let them do that work as long as they do their other work too."

The last conversation I had with him, he goes, "Yeah, we're looking at the technology."

And then, in December, Goldman Sachs filed a patent for something called SETLcoin, which is a cryptocurrency leveraging blockchain technology. Now, Goldman Sachs is not trying to create a competitor currency to the dollar, the yen, the yuan, or the euro; but what it has figured out is how, by creating a sort of walled garden where it can do asset settlements, stock sales, electronic transfers of funds, things like this, between, say, China and the United States, without those crazy fees and without so much falling off the back of the truck, it can save an enormous amount of money.

Any of you who have ever been paid or tried to pay somebody transnationally know it is a pain in the neck. You know, to get paid $10,000 you're paying a $300 fee. What I think is that blockchain technology will take that $300 to transfer $10,000 and it will take it to three cents.
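The $300-versus-three-cents comparison works out to a factor of ten thousand. A quick back-of-the-envelope sketch—the figures are the ones quoted in the talk, not actual bank rates:

```python
# Figures quoted by Ross for a $10,000 cross-border payment.
transfer = 10_000.00
legacy_fee = 300.00     # today's correspondent-banking fee
projected_fee = 0.03    # Ross's projection under blockchain settlement

legacy_rate = legacy_fee / transfer        # 0.03 -> 3% of the payment
projected_rate = projected_fee / transfer  # 0.000003 -> 0.0003%

print(f"Legacy fee: {legacy_rate:.1%} of the transfer")        # 3.0%
print(f"Projected:  {projected_rate:.4%} of the transfer")     # 0.0003%
print(f"Reduction:  {legacy_fee / projected_fee:,.0f}x")       # 10,000x
```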

Taking the wellbeing of Goldman Sachs out of it, what I think is most positive is the implications for the developing world, where remittances are so important—you know, the Bangladeshi workers working as construction workers in the United Arab Emirates, the Mexicans working in restaurants here in New York and sending money back to Mexico—being able to draw the friction and the fees out of international payments I think will significantly benefit the developing world.

JOANNE MYERS: What about the negatives, like money laundering? How do you tax it? Those are other issues that come up because of this.

ALEC ROSS: I think it will reduce it.


ALEC ROSS: I do. I think mainstream financial institutions, more so than anybody else right now, don't want to facilitate nefarious activity—nothing could be worse for a publicly traded financial services institution than to facilitate criminal activity, because if they do, their stock prices are going to go down, their executives are going to get fired, they are going to get fined. I think we have seen changes. For example, if you look at the Swiss banks right now, the Swiss banks over the last several years have gone through this cleansing process, which has changed Swiss banking in its near entirety.

So I think that actually a cryptographic distributed ledger system, as administered by real banks, can bring more transparency and accountability to payments than existing systems.

QUESTION: Sondra Stein.

When you spoke about changing the education system so young people have the skills for today and the future that they need, let's assume they do that. I still suspect with artificial intelligence there won't be enough jobs. And we talked about a base income.

But another approach—I don't know if you think it's feasible—is to have more shared jobs. So people work but they have more leisure—jobs that one person does two people do. But I don't know if that's at all feasible in technology or the jobs being created.

ALEC ROSS: The biggest, most public espouser of that view is a guy named Marc Andreessen, who is a very prominent venture capitalist in Silicon Valley. He co-created the first widely used web browser. Marc's theory—and boy, you talk about shifting norms; nothing could be a bigger shifting norm—is that in a world with more automation, where machines are doing more of what is traditionally human labor, the workday is going to shrink. He believes that labor that historically would have been done during a 10-hour workday will be done by fewer people over fewer hours. And because there's so much consumer surplus created by work that is dull, dreary, or dangerous now being done by machines, he believes it will actually enable humans to spend more of their lives pursuing leisure and the arts. Now, this is a decidedly utopian view, but he's a pretty smart guy, and I think it is as reasonable to examine that view as it is to examine the dystopian view.

Again, my view tends to be a little bit more up the middle. But it is reasonable to think that in a world of increased automation that we ought to have more abundance.

The question, in part, though, is that whether they have abundance or not, men between the ages of 16 and 32 who are unemployed or underemployed and have too much time on their hands tend not to make very positive use of that time. And so what I worry about is that there are aspects of human nature that contradict the economic theory here. I just hate the idea of 19-year-olds out of work.

QUESTIONER: If I could just answer, what I meant was that, on the one hand, you have fewer people working, and let's just say we have a base income; but, on the other hand, if it's possible to share those jobs. So accepting what you're saying—hanging around isn't good, everyone needs to be involved—this way at least more people would work at least a half a day as opposed to not working. But I don't know if that's at all possible.

ALEC ROSS: It's worth examining. It's just difficult. Look, it's worth examining. I will reserve judgment.

QUESTION: This is fascinating, I must say.

First of all, with the Bitcoin business, I wonder when the government is going to have to come in and rescue the whole industry. All of these derivative kinds of things—it's not the same, but I'm just questioning all of these things that most of us don't understand so well, and I was in the business for a long time. I think the complications may prove not to be so great in the end. But that's not my point.

You spoke about more transparency for public persons so that their private lives are right out there and there will be less of a gap between the private and public. The other side of that is: Stuff gets on the Internet that's absolutely false, and I've seen it. For some people they can't get rid of it; it remains there forever. You can say whatever you feel like saying on the Internet, it gets published, and there it is. What's your answer to that?

ALEC ROSS: What I believe is that the slander and libel laws that apply to the traditional printing press and to speech ought to be applied equally online.

QUESTIONER: If I may say so, it's an individual. You're not going to hire a $100,000 law firm to bring your name back.

ALEC ROSS: And so the negative consequence of this is that we live in a society with free speech. This is the downside of living in a society with free speech.

There are legal means of remediation. They are our slander and libel laws. If you don't have the means or the inclination to hire a law firm to avail yourself of the legal process to punish somebody for libeling or slandering you, then you have to live with the implication of our living in a society that promotes free speech.

QUESTION: I'm Tyler Beebe. First of all, thank you for your very interesting comments.

Secondly, focusing in on cybersecurity, there's a lot of, as you know, fretting in fairly high places about the vulnerability of our electrical grid system to cyber attacks, and the observation is they would be fairly easy to hack into. Do you agree with that, that it would be all that simple? And are we doing halfway enough to build up buffers that would prevent such attacks?

ALEC ROSS: The answer is: No, I don't think it's that easy. What I do believe is that there are certain states that have the capability who we know have gotten inside various networks. But what they lack are the incentives to do anything.

Let's use China as a strawman, okay? Let's assume that they have significant cyber capabilities and that they could get into our grids. Right now, the Chinese government does not have any incentive to turn the lights out. If the lights get turned out in New York and the New York Stock Exchange and NASDAQ plummet 3-4 percent, you want to know who's a loser? China, because they are equity holders, they are stakeholders in our interconnected economy.

The biggest threat is the Russians. The reason why the Russians are the biggest threat is because in a post-Ukraine/post-sanctions world, the Russians increasingly do not feel like stakeholders in the collective wellbeing of our markets. And so what I believe is that the Russian government presents far and away the greatest threat to literally or proverbially turning the lights out. When I say the Russian government, I mean both the units inside the Russian government with the capability as well as the Nashi and other sort of state-affiliated hyper-nationalist hacker organizations aligned with Putin.

This is where the brinkmanship really—you know, it's sort of like a middle school playground, where you're testing each other. What I believe is that the Russians haven't really tested us here. I think part of why they haven't really tested us here is I think that they do believe that at least this administration would respond, and respond fairly ferociously, and I think they believe we would respond fairly ferociously because we've told them we would respond fairly ferociously.

What's interesting is I do believe that Obama has grown increasingly bellicose in the cyber domain. He is a different president on this today than when I came into government in 2009. Even turning the lights out on the Internet in North Korea, I believe, was coordinated between the United States and the Chinese government as a way of essentially sending a message to the North Koreans, "Don't do that again or we'll cut off your access to porn" [Laughter] which is what I really believe it is.

So I do think that the biggest threat here comes from the Russians, and that they have not yet tested it.

But I don't think it would be easy to do this. I think if it were easy, it would have been done, because there are enough non-state actors out there who want nothing but mayhem for us, that they would have done it.

QUESTIONER: Certainly the North Koreans don't have much stake in NASDAQ, so they wouldn't give a damn about that.

ALEC ROSS: That's right.

QUESTIONER: But on the other hand, we'd zap them back, I guess.

ALEC ROSS: Yes. Thank you.

JOANNE MYERS: Well, you certainly did not turn the lights off; you turned them on for all of us for the future. Thank you.
