A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control

June 2, 2015

Introduction

JOANNE MYERS: Good morning. I'm Joanne Myers, and on behalf of the Carnegie Council, I would like to thank you all for joining us.

It is a pleasure to welcome Wendell Wallach to this Public Affairs program. Mr. Wallach is a celebrated ethicist who is renowned for his expertise on the social impact of emerging technologies. Today he will be discussing A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control. We are delighted that he has chosen the Carnegie Council to launch his book. For those of you who are interested, we will have copies of A Dangerous Master for you to purchase following the program today.

Since the dawn of civilization, mankind has been obsessed by the possibility that one day we could all be extinguished. Some scientists will tell you that asteroids hitting the earth will be the destroyer. Others caution that it will be a nuclear disaster. Then there are also other scientists who are increasingly of the view that even newer nightmares are lurking on the horizon, and that new fear is technology.

The concern is that transformative technology, whether genetic manipulation, autonomous robots, drones, or even 3D printers, could alter the human race and disrupt the structure of our society. While technological advances have tremendous potential for solving the world's most pressing problems, at the same time even the most beneficial discoveries can be misused and have undesirable side effects which could undermine institutions and time-honored values. This raises ethical concerns. For example, how do you program robots to know right from wrong? How do you balance benefits and risks, opportunities and hazards?

In A Dangerous Master, our guest not only provides a foundation for a broader understanding of the emerging technological landscape, but in a world of unintended consequences wrought by human design, he raises a very important point: In striving to answer the question "Can we do this?" too few ask "Should we do this?" Accordingly, he invites us all to become active participants and consciously engage in addressing the challenges arising from the adoption of new technology.

Please join me in welcoming a very forward thinker, who will take us to the front of a new frontier, our guest today, Wendell Wallach. Thank you for joining us.

Remarks

WENDELL WALLACH: Thank you, Joanne, and thank you all for coming out so early this morning. I'm really moved.

I am truly honored to be here at the Carnegie Council today, and for the kick-off talk, because today is the official publication date of A Dangerous Master. This is truly a lovely way for me to get started to talk about this book.

The title comes from a quote from a 1920s Norwegian peace activist, Christian Lous Lange. He said, "Technology is a good servant, but a dangerous master." I have juxtaposed this with a more modern quote from Professor Irwin Corey that goes, "If we don't change directions soon, we'll end up where we're going."

In A Dangerous Master, I would like to suggest that where we are going is not necessarily where we want to be going and there is need for a course correction, a course correction so that technology does not become our master and, instead, stays our good servant.

If you were just to listen to the techno-optimists, you would think we were on a highway to heaven on earth and the buses are speeding up at an exponential rate. On the other hand, if you listen to the techno-pessimists, we are clearly going to hell in a handbasket.

But most of us perceive technology as a source of both promise and productivity, and yet there is considerable disquiet about specific technological developments and the overall course of technology. That disquiet can be seen in the worldwide prohibition on human cloning and human growth hormones in sports. The EU has its debate on genetically modified foods. The United States is debating embryonic stem cell research. Then there are these ongoing issues of biosecurity, infosecurity, and the toxicity of nanoparticles. More recently, I'm sure some of you have been reading about CRISPR [clustered regularly interspaced short palindromic repeats], a new technique for editing genes, and calls coming from various leaders—Elon Musk and Stephen Hawking—for AI safety: ensuring that artificial intelligence systems are demonstrably beneficial and remain under control.

Self-driving cars, I believe, give us the metaphor for what we are dealing with. Some people greet them with a sense of inevitability. But what I think we see in self-driving cars is that technology is moving into the driver's seat as the primary determinant of humanity's future.

In this talk, I am going to start out with a prediction, I am going to give you some reasons for that prediction, and then I am going to turn to ways to address what I think is problematic here. What you will hear will only be a tiny slice of what is covered in the book, which I have tried to make a somewhat comprehensive overview and introduction to the emerging technologies. Many of you know what some of these sciences are about, but I would suspect there are very few in this room who know what all of these sciences are about. So I want to provide a friendly introduction and, at the same time, give people a sense of what the issues are so that they can join in the conversation and maybe address some of those issues. We are dealing with everything from genomics to synthetic biology, nanotech to geoengineering, AI to augmented reality.

Here's my prediction: Social disruptions, public health and economic crises, environmental damage, and personal tragedies, all made possible by the adoption of new technologies, will increase dramatically over the next 20 years. I am not making this prediction to precipitate fear. I am actually making the prediction in the hopes that it will encourage the kinds of actions that will prove me wrong. But unexpected disasters do occur when we fail to address the challenges inherent in managing new technologies and complex systems.

From thalidomide babies to the chemical disaster in Bhopal, India, and from meltdowns at Chernobyl and Fukushima to the Gulf oil spill, and on to the derivatives crisis, technology has been complicit in so many tragedies over the past decades. My reasons for thinking that there will be an increase in such tragedies are that we have an increasing reliance on complex systems that we little understand and that are on occasion quite unpredictable in the way they act. Furthermore, there is an accelerating pace of innovation in a climate where there is a lack of effective ethical and governance oversight. Finally, there is a plethora of different harms associated with specific technologies.

First, the "c" word, complexity. Complex adaptive systems are everywhere. Every one of us, from a scientific perspective, every organism, is a complex adaptive system. Financial markets are complex adaptive systems. Complex adaptive systems can on occasion behave unpredictably. They have, under pressure, tipping points, where they start to self-reorganize into different forms, and they have emergent properties. The very fact that you are sitting here, aware that you are in this lovely room and engaged with me over some very complex ideas, is evidence that we are conscious beings, and yet none of the nerve cells that make us conscious are themselves conscious. Consciousness is an emergent property.

As I said, complex systems are poorly understood. Some people believe they are difficult, if not impossible, to fully manage. I think most of us think they are difficult. Some people believe they are impossible to fully manage.

Furthermore, we live in a world of systems within systems within systems. There are these countless feedback loops between one system and another system.

There are five reasons why complex systems fail:

  • One is incompetence or wrongdoing. The BP oil spill would be a good example of that.

  • Another is design flaws or vulnerabilities. There were design flaws in the nuclear plants at Chernobyl.

  • Then there are what Charles Perrow named normal accidents. Normal accidents are when something goes wrong, but nobody has done anything wrong. Three Mile Island is a good example of a normal accident. It happened because three different components failed simultaneously, and while the engineers had backup devices for the failure of any one of those components, they had never considered that combination. Furthermore, to have considered that combination, they would have had to consider countless other combinations, perhaps as many combinations as the sands on the beaches of our country. They could not really have looked at all exigencies.

  • Then there are disasters that happen because we underestimate the risks and fail to plan for low-probability events. But low-probability events do occur, particularly in a high-speed world driven by computers, where thousands and even millions of events can happen in a second or even a millisecond.

  • Furthermore, there are black swans, unforeseen low-probability but high-impact events.

The derivatives crisis was probably attributable to both of those reasons. It was, to some extent, an unforeseen crisis, but to another extent, financiers could have looked at what would happen if the real estate market did start to melt down.

While reading and reviewing the literature about why Three Mile Island occurred and also why the Challenger exploded, Malcolm Gladwell concluded "We have constructed a world in which the potential for high-tech catastrophe is embedded in the fabric of day-to-day life."
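The combinatorial problem behind normal accidents can be made concrete with a quick back-of-the-envelope calculation. The component count below is hypothetical, not a figure from the talk; the point is only how fast the number of simultaneous-failure scenarios grows:

```python
from math import comb

# Hypothetical plant with 100 safety-relevant components.
# Each set of k components that could fail together is a distinct
# scenario an engineer would have to analyze in advance.
n = 100

pairs = comb(n, 2)    # two-component failure combinations
triples = comb(n, 3)  # three simultaneous failures, as at Three Mile Island

print(pairs)    # 4950
print(triples)  # 161700

# All scenarios involving up to 5 simultaneous failures:
total = sum(comb(n, k) for k in range(1, 6))
print(total)    # 79375495
```

Even under these modest assumptions, anticipating every three-way combination means analyzing over 160,000 scenarios, which is why designers reasonably stop at single-component backups.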

My second reason was the accelerating pace of discovery and innovation. Some would claim that it is accelerating at an exponential rate, meaning it is doubling at regular intervals, and that it is moving in the direction of a technological singularity, often represented as a time at which computers will equal human intelligence and then, in an intelligence explosion, far exceed our intelligence to become superintelligent. The scholarly community tends to be rather skeptical about some of these more speculative possibilities, recognizing a history of unfulfilled predictions and also that complexity will thwart easy progress toward some of the goals that have been talked about. But the actual speed of acceleration is the central issue for determining the adequacy of existing oversight mechanisms.

I'm your friendly skeptic. I'm friendly to the can-do engineering spirit that says remarkable technological benefits lie in our future. But I am deeply skeptical that we know enough about intelligence or many other things being discussed to fully realize beings that have the equivalent of, let's say, human intelligence, let alone far exceed that. Nevertheless, there is an accelerating rate of technological development, whether exponential or not, and there is a growing gap between the emerging technologies and their legal/ethical oversight.

Then there is the plethora of different things that can go wrong with individual technologies. Again, when I am talking about what can go wrong, I am not saying there aren't benefits. Nearly all of these technologies are being developed because it is perceived that the benefits far outweigh the risks, or it is believed that that will be the case. But there are dangers. I just want to stress the need to reflect upon and manage those dangers.

There might be designer pathogens that some bad actor or some unsuspecting youngster creates in their own laboratory. We are moving in the direction of do-it-yourself biology. There are biosafety and bioterrorism concerns, cyber conflicts. There are issues around justice and fairness in the distribution of new technologies.

We are in the midst of a veritable tech storm, an outpouring of scientific discovery and technological development. While rain showers can be very nurturing for spring flowers, an incessant outpouring can be quite destructive.

Can we manage technological development? After all, when you look at history, all technologies developed in very uncertain ways, and we can't always predict what the benefits will be. We don't want to interfere with some of the more beneficial developments.

David Collingridge, in 1980, noted that it is easiest to control development early on in any technology, but by the time the undesirable consequences are discovered, the technology is often so entrenched in the economic and social fabric that its control is extremely difficult. This has become known as the Collingridge dilemma, and it has stymied oversight of technology for many years. But I reject its simplistic binary logic—either you can't do it because you don't know what the problems are or it's too late to manage the technology. In the development of every technology, there are inflection points, there are windows, there are opportunities in which we can intervene and change the course of development. A small change in the course of development of a technology can actually over time lead us to a radically different destination.

Here are a couple of examples when action was taken at inflection points. Over the past decade, world health authorities were confronted with two different flu outbreaks—one, swine flu, and the other, bird flu—and they were afraid these would mutate into a form that could lead to a global pandemic, perhaps on the order of the Spanish flu pandemic of 1918, which killed 3 to 5 percent of the world's population. Therefore, world health authorities took major strides in communication and in putting infrastructure in place to catch a pandemic early and, hopefully, to find ways of mitigating or addressing it.

That was one inflection point. We have not necessarily stopped absolutely the chance of having something like the Spanish flu pandemic, but we have lowered that possibility significantly and we have raised the ability to address pandemics that may occur over the next decades.

Another example is in genomics. Genomics itself represents a radical inflection point in human history that will take us in all kinds of directions that we don't fully understand. But two years ago, in 2013, the U.S. Supreme Court had to make a decision about whether human genes were patentable. At that time it was believed that 44 percent of the genes in the human body actually had already been patented. The Supreme Court decided, no, human genes could not be patented. Through that one action, they altered the course of genomic history.

But there are inflection points that we are just beginning to witness, where we have some opportunity to address challenges on the horizon. There is technological unemployment, the downward pressure on wages and job creation from the introduction of new technologies—largely, increasingly intelligent robots—compounded by life extension, where people are living longer and retiring later. Then there are other technologies, such as 3D printing—a kind of desktop manufacturing akin to desktop publishing—which will disrupt many industries.

Another area of concern is the robotization of warfare. Is this a good idea or a recipe for future disasters? Many of you may be aware that there is already an international campaign to ban lethal autonomous robots. I am actually very much a part of that campaign, though I question the present course, which is directed at an international arms-control agreement through the United Nations Convention on Certain Conventional Weapons, the CCW. That is known as the "ban killer robots" campaign.

I don't think that traditional forms of arms control are really a good method to control robotic weaponry. To get the agreements in place, let alone inspection regimes, would be really difficult, and inspections wouldn't mean very much when the difference between an autonomous weapon and a non-autonomous one may be merely a question of a few lines of code or a switch added to the system after the inspectors have left the location.

I propose a different way of proceeding, largely through the human rights councils. We should put in place a high-order moral principle stating that machines must not make decisions that result in the deaths of humans. The concern here is with machines that could both pick their own targets and dispatch those targets without a human directly in the loop. I propose that that kind of action be treated as malum in se, a Latin legal term meaning "evil in itself." We consider rape evil in itself; and slavery, which for thousands of years was accepted, is now considered a form of evil in itself. So it's not as though we can't come up with new principles for what is and what is not evil.

Even before the killer robot campaign was set in motion, I suggested that we needed an executive order from the president of the United States declaring that lethal autonomous robots violate existing international humanitarian law.

But robots being used for warfare are only the tip of a much larger iceberg. That iceberg is increasingly autonomous forms of artificial intelligence being introduced into every walk of life. Presently, in principle, if things go wrong, the liability and culpability is the responsibility of the companies that produce these devices. But we are living in an environment where we are diluting responsibility in so many walks of life. I am concerned that in the long run, increasingly autonomous robots are going to threaten to undermine the foundational principle that human agents, either individual or corporate, are responsible and potentially accountable and liable for the harms caused by the deployment of any technology. We need to reinforce the principle of responsibility.

How should we address the broader array of challenges posed by emerging technologies? First of all, we can challenge some of the assumptions that are drivers of the storm. For techno-enthusiasts, there is a technological solution to every problem you can suggest. Whether those technological solutions are realizable is another question.

But here is an example of techno-solutions that I think we have all been hearing for the last 30 or 40 years: Technology will lower health-care costs. Yet health-care costs, as we all know, are growing at 6 to 8 percent a year. Most analysts attribute at least 50 percent of that to the development of new medical technologies or the dissemination of existing medical technologies. Every hospital has to have a robotic surgery system now.

The health-care budget in the United States is $3.05 trillion. By itself, that would constitute the fifth-largest economy in the world, larger than the economy of France and only a half trillion dollars less than the economic powerhouse, Germany. Furthermore, in the United States we get less bang for our health-care buck than many other countries. Presently 17 percent of gross domestic product goes to health care, and that share is growing; it will reach 20 percent by the end of the decade. This is an untenable situation.
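The 17-to-20-percent trajectory can be sanity-checked with a quick compound-growth sketch. The health-cost growth rate comes from the 6 to 8 percent figure above; the nominal GDP growth rate is my assumption, not a number from the talk:

```python
# Back-of-the-envelope check: health care's share of GDP over five years,
# assuming health spending grows ~7%/yr (midpoint of the 6-8% cited)
# and nominal GDP grows ~4%/yr (an assumed rate).
share = 0.17
health_growth, gdp_growth = 1.07, 1.04

for year in range(5):  # roughly to the end of the decade
    share *= health_growth / gdp_growth

print(round(share * 100, 1))  # 19.6 -- approaching 20 percent
```

Under these assumptions, a modest three-point growth differential is enough to push the share close to 20 percent within five years.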

We can begin to alter our engineering practices so that we build values into the kinds of systems we are producing. The National Academies have recommended that research on the ethical, legal, and societal impact [ELSI] of technologies should be funded right along with the technologies themselves. We did see that to a small extent in the National Nanotech Initiative—0.3 of 1 percent of the budget for the NNI did go to ELSI research.

Engineers can build various technologies into the systems they create to make sure that they could be hindered if something goes wrong. A few years ago, Monsanto bought a company that had produced a suicide gene, basically ensuring that that plant could not reproduce. Monsanto was more or less crucified for having this gene, and they swore they would never deploy it. But this is something that perhaps we should visit again, because a seed that can't reproduce will not cross-fertilize, or it will be difficult for it to cross-fertilize, with other plants.

In synthetic biology, George Church, one of its leaders and a developer of some of its leading-edge research, has been very responsible in also trying to come up with methods for taming the very possibilities he is opening the door to with his other research trajectories. He has suggested that we tame the impact of synthetic organisms—often just single-cell organisms, or even just biological materials—by building them from DNA that differs from naturally occurring DNA and that functions through metabolic pathways different from those of existing organisms. These are responsible schemes, and he has suggested similar safeguards for other biological technologies.

We aren't building little nanomachines yet—most of nanotechnology, this very microscopic form of engineering, is about new materials—but there is hope that eventually we will have them. One of the poster children for what could go wrong is called "grey goo": the idea that tiny self-reproducing nanomachines might gobble up all the carbon-based matter on the planet, reproducing themselves until nothing remained but a 3-foot-thick sludge of grey goo. One way we might proceed is to build into every nanomachine a component that could be dissolved with a known chemical reaction.

But, of course, solutions like this will only work with responsible scientists. There are always bad actors out there. We can't ensure what they will do with these new, powerful tools.

Another suggestion is that engineers should consider responsibility as a design component when they build new systems, in the same way that they presently look at certain components to make sure they don't overheat. If they looked at responsibility—who would be responsible if the system failed—as part of the design specifications when they were building the systems, that might actually direct them to build their systems on very different platforms than they might otherwise, platforms that we have a bit more control over, or at least that corporations would be willing to accept responsibility for the devices.

We could embed ethicists and social theorists into design teams, not as naysayers, but as active participants in the engineering process, sensitive to certain societal considerations that other engineers might not be sensitive to. Then we could explore building moral decision-making faculties—sensitivity to moral considerations, and the ability to align machine values with those of humans—into computational systems, what I referred to in my last book as moral machines. This has suddenly become a topic of great interest, as a recent breakthrough in artificial intelligence called deep learning has raised alarms within the artificial intelligence community.

Gary Marchant, who is a professor of law and director of the Center for Law, Science and Innovation at the Sandra Day O'Connor College of Law at Arizona State University, and I have made a proposal for what we call governance coordinating committees [GCC]. We are trying to suggest an alternative form of oversight to overly bureaucratic and cumbersome laws and regulations. Our idea is that we put in place a group that functions as issues manager and orchestrator to coordinate the activities of the various stakeholders. It would be comprehensive in its monitoring. It would flag issues and gaps, and try to find solutions within a robust set of available mechanisms. It would be mandated to avoid regulations wherever possible and to favor soft governance.

Soft governance refers to industry standards, professional codes of conduct, insurance policies—a plethora of different guidelines that can help in the oversight of emerging technology. The weakness of soft governance is that it is harder to enforce. So in some areas you still need regulations and law—hard governance. But the idea here is to have a vehicle that is nimble, flexible, adaptive, and lean.

There, of course, would be implementation considerations, many implementation challenges for how you would go about creating governance coordinating committees. For example, from whom would they get their authority and legitimacy? Could they have adequate influence? How would we pick their members and administrators? How would they establish their credibility? Should they be government or private enterprises? There are advantages to both. They would have more clout if they were in the government. On the other hand, they would be more insulated from political pressures if they were private. And who would fund a GCC? To whom would they be accountable?

All of these considerations, in the present political climate, might make one think that GCCs are just too complicated or hopelessly naïve or perhaps both. Yet something like this is needed. We have solved difficult problems in the past when we have needed to.

Gary and I have proposed that we begin a pilot project in one or both of two fields: artificial intelligence and robotics, or synthetic biology. These are both emerging technologies that at this stage are relatively unencumbered by oversight and regulation.

International coordination—one of the biggies. One would hope we could harmonize our policies with those of other countries, but even with our neighbor Canada there have been some difficulties. Still, strides have been made in that direction. The best we can hope for is coordination, if not full harmonization. GCCs and their counterparts in other countries could function as a spearhead for that kind of coordination. But throughout the world we have differing values, and corporations are free to move to countries without cumbersome regulations to conduct research that, for example, they may not be able to do here in the United States.

Just take one example, the new technique that radically speeds up the ability to edit genes, known as CRISPR. Some of you have perhaps been reading about this over the last few months. There have been calls for a moratorium on using CRISPR to alter human genetic material. But immediately following those calls for a moratorium, China announced that it had already begun this, working on liver tissue to cure a liver disease.

Then there is the fact that countries differ very strongly on the degree of precaution they would like to see put in place. In the European Union they have codified a precautionary principle which, in effect, says that if there is any perceived danger, the developers have to prove that they can manage the dangers before they can go ahead with the research. It is largely the exact opposite of what we do in the United States. We wait for something to go wrong, because we would like to stimulate innovation, we would like to stimulate productivity, and we would like to get the early gains you get from being first in what could be a transformative technology.

These issues will have to be bridged. But the fact is even the precautionary principle is applied very unevenly in Europe, and there are many areas in which we apply a precautionary principle. There are ways to work through those issues.

There are many shared concerns. One of the biggies is geoengineering. Geoengineering is the application of technological techniques to mitigate the effects of global climate change. Some of these techniques look like they could potentially be as dangerous as the very problem they intend to solve, dangerous to climate patterns, but also politically dangerous in how they might be applied by different countries. Recognizing this concern, the United Nations is already meeting to formulate guidelines for geoengineering experimentation.

But in all of this, it is important to keep in mind that technological development can both stagnate and overheat. I have perhaps been emphasizing the overheating. But where it stagnates or where benefits can truly be realized, we want to stimulate development. A central role for public policy, law, and ethics is to modulate the rate of development and the deployment of emerging technologies. If technological development, however, is truly accelerating, the need for foresight and planning is pressing.

Thank you very much.

Questions

QUESTION: Ron Berenbeim.

One question that I think has got us all concerned is the potential for technology to accelerate the wealth and income gap, both inside countries and globally. What is the best way to manage and deal with that?

WENDELL WALLACH: That's a biggie. I just waved at it as I went through my presentation under the name of technological unemployment. In fact, I will be in Canada tomorrow debating technological unemployment with Nick Bostrom at the University of Ottawa.

That is a very difficult thing, because it is something that could be exacerbated by technology, but it is not just a technology problem. It is very much also a problem inherent in the anomalies of capitalism, for example.

For most of our history, 50 percent of GDP went to wages and 50 percent went to capital. We are seeing a radical alteration in that, largely because of the anomalies of money being made from high-tech industries. They just function very differently than old industries. In 1990, GM, Ford, and Chrysler, which were the Big Three then, brought in, I think, $36 billion in profit and they hired over 1 million workers. The Big Three today—Apple, Facebook, and Google—bring in over $1 trillion in profit and they only hire 137,000 workers. That is not anybody doing anything wrong. It's just that technology industries are different than old manufacturing industries.
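The gap being described can be made vivid by dividing profit by headcount, taking the figures quoted above at face value for illustration:

```python
# Profit per worker, using the figures as quoted in the talk.
old_big3_profit = 36e9          # GM, Ford, Chrysler, 1990
old_big3_workers = 1_000_000

new_big3_profit = 1e12          # Apple, Facebook, Google, as quoted
new_big3_workers = 137_000

old_per_worker = old_big3_profit / old_big3_workers   # $36,000 per worker
new_per_worker = new_big3_profit / new_big3_workers   # about $7.3 million

print(round(new_per_worker / old_per_worker))  # 203 -- roughly a 200-fold gap
```

Whatever the exact figures, the ratio is the point: profit per employee in the new industries is orders of magnitude higher, so far less of the income generated flows through wages.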

But it has created a situation where more and more of that income is tilting toward the owners of capital. Furthermore, by robotizing jobs, you contribute to that, because all the productivity from the labor performed by robots goes to capital.

So this is a major issue. I don't think it is an issue that is going to be solved technologically. I think, in a certain sense, to blame it on technology is to miss the larger problem, which is that we have a serious distribution crisis. It is already evident, and it is only going to get worse as more and more jobs get turned over to technology. By some estimates, 47 percent of the present jobs in the United States could be computerized. That's 47 percent. Furthermore, we are already in a climate where research has shown that if confronted with a $2,000 unexpected expense in a month, 50 percent of Americans cannot find a way to come up with that $2,000. 

This is an untenable situation and one that I think could actually lead to major social disruptions, once the public starts to catch on that we are truly in the midst of technological unemployment.

Now, I know there are arguments that say we aren't. Technological unemployment is like the long-held Luddite fear that technology will rob more jobs than it creates. Yet for 200 years, that hasn't been the case. It appears to be different this time around. Yet the full impact of it has not been recognized.

There are policies—but they are more in the realm of political economy than technology—for addressing this distribution crisis before it hits a crucial point. I am not sure that our society at this moment would take any of the conceivable actions, from guaranteed work to guaranteed minimum wage. There are many different arrangements that are being proposed. I don't know what the correct solution is, but I think to pretend that there isn't a distribution crisis is to walk through this world with our eyes closed.

QUESTION: Susan Gitelson.

Your presentation was absolutely extraordinary and helped us deal with many different subjects. Let's talk about robotic warfare. You wrote a book saying how to teach robots right from wrong. Robotic warfare suggests that somebody 10,000 miles away from a target can pinpoint whatever and not really take account of collateral damage. The robot isn't exactly responsible, but somebody must be concerned about distinguishing right from wrong.

WENDELL WALLACH: In the present case, we are talking largely about the drones that have already been deployed. The operations can be taking place in Las Vegas or other parts of the world. There are human decision-makers in that action. In fact, whole teams of lawyers and political leaders often sit behind the robotic operators, and it is they who finally make the decision that a target can be struck.

This is a very difficult thing in that there has been collateral damage. But the broader issue is, is there more collateral damage in this form of warfare than in other forms of warfare? I personally feel that the discrimination this form of warfare allows, together with the evidence I have seen of the care that has at least been taken in recent years, if not in the early years, suggests that it is a lot better than just directing guided missiles into territories where we don't even know what's going on. The bigger problem with drone warfare is that we are violating international humanitarian law by sending drones into regions that we are not at war with. That is the immediate concern.

The future concern is more about allowing these machines increasing autonomy to pick their own targets and dispatch people. At the far end of that are the Terminator nightmare scenarios, with machines that want to eliminate all of us. I don't know about that kind of science fiction. I don't know if it is helpful, but I think it does at least underscore an intuition that this is not a good road for us to go down. I think it's very important for us to find ways to ensure that there is meaningful human control.

A colleague of mine, Ron Arkin, at Georgia Tech, claims that he can implement the capacity to follow the laws of armed conflict—the internationally agreed-upon principles for fighting a just war—and the rules of engagement in robots, so that they will be more responsible than human soldiers. He says that this is actually pretty low-hanging fruit. He bases that on research from the Surgeon General's Office showing that soldiers are not very good at following the laws of armed conflict. For example, a majority will not squeal on a buddy even if he has committed a war atrocity.

I think, though, that Ron is being a little naïve and that those kinds of robots could only be deployed in very, very tightly constrained circumstances. We don't know how to ensure that robots will have the discrimination to distinguish between a combatant and a noncombatant. If a terrorist is dressed as a villager, we aren't so good at that either. But we do bring powers to bear—awareness, discrimination—that we don't even know how to implement in artificial intelligence at this stage of the game. Yet it seems that our government is intent on maintaining the right to produce such systems.

I am hoping that we will at least get some kind of international restraint on when robots can and cannot be used in warfare, at least the kind of restraint where they are under meaningful human control.

QUESTION: Jared Silberman, Columbia Bioethics.

I want to bring in the other side on the issue of robotics and warfare, and link it to the Germanwings incident and the recent Amtrak incident. Isn't there an outcry that we need the robot to control situations because of human frailty or maverick criminal behavior or whatever is determined—that we should have that capability so that tragedies like those cannot occur? I would like your comments on that: those accidents could have been avoided if there had been robotic control that takes over when it realizes that the human is not there.

WENDELL WALLACH: I think it's clear that that is true in many situations, that turning over certain kinds of tasks to robots, or at least having them monitor when the humans are functioning properly, would be a good thing. The question is whether we want to turn these tasks totally over to robots.

I am more in favor of the kind of situation we have in aircraft, where basically robots or intelligent systems are flying our planes, but there is still presumably a well-trained human to make the final decision. That is where my concern lies, not that we don't utilize technology, the benefits it can bring to us, but we do it in partnership with intelligent humans and put aside this argument that wise robots are going to make decisions better than dumb humans.

QUESTION: Bob Perlman.

A quick question. Given the hysteria over GMO [genetically modified organism] seeds, despite safety studies, could you please talk about their benefits in drought-stricken areas in Africa and on the West Coast of the United States—that risk-reward balance?

WENDELL WALLACH: It is clear that in the United States we embrace the benefits of GMOs, whereas our counterparts in Europe are concerned about the downside. Neither side really understands the other very well. We treat the Europeans as if they are just stupid because so much research shows that GMOs are good, and they think we are blind to many factors that are taking place. But GMOs, synthetic biology—they are going to engineer different kinds of organisms, and some of those organisms will thrive in climates where nothing else can right now. Many years ago, a friend of mine realized that he could make crops grow in very arid climates. If we are trying to ameliorate starvation in the world, then we certainly want crops like that.

I don't know how many of you remember the Club of Rome that was raising alarm bells way back in the 1960s that overpopulation was about to lead to some Malthusian nightmare of starvation. Yet they hadn't perceived that there were going to be miracle grains on the horizon, which have now allowed us to sustain close to 8 billion people on this planet. That is pretty remarkable. I think we do want to save people from starvation wherever we can.

On the other hand, when you start to look at the larger picture, we now have 8 billion people on this planet, all of whom would like to drive BMWs. Energy demand is going to triple in coming decades as those rising billions prosper. Think of how difficult it is for us to meet the energy needs we have today, let alone that we are going to have to reduce dramatically our reliance on carbon-based energy production because of global climate change. This is not an easy challenge, and certainly not one we are really addressing today.

QUESTION: Carol Perlman.

The link that you have between technology and the escalation of the health-care dollar, or spend, I want to better understand. Take technology such as diagnostic imaging—I refer to that as the first step in the health-care continuum, because early, accurate diagnosis, whether through CT scans, MRI, or PET scans, is essential. Those are vital tools in a physician's toolbox for understanding how best to treat that patient: the best drug for the best patient, with the best possible outcome. So I want to understand that.

Also, to me, demographics really play a significant role in the health-care dollar, and the fact that we are living longer, for a variety of wonderful reasons.

WENDELL WALLACH: I am very pro-medical technologies to improve health care. I am certainly not trying to be a Luddite here, an anti-technology guy, or say we don't want to produce better medical technologies. I am just trying to underscore that we truly have a crisis in the cost of health care and we aren't using our health-care dollars well. It is not clear that every hospital needs an MRI. That would just be an example of a place where cost could be lowered. We are developing greater and greater imaging capabilities. Every hospital and every patient is going to want that also, even if they are a hypochondriac.

When you get into the details of managing health-care costs, I think we all know that that is truly one of the swamps that we don't know how to get into and get out of. But it is also clear that we aren't addressing it. That is all I really want to underscore here, not that we don't want better medical technology.

But we aren't spending our health-care budget well. If you look at comparisons of how we spend our health-care dollar versus comparable countries like Canada and Switzerland, you see radical differences between what we get and what they get for the same amount of money.

I know there are arguments about whether our health-care system is really better or not. I think if you really went into the details, you would see that for the great health care most of us in this room get, when you start looking across the population, some of these other societies are doing a better job. We could learn from them. We could learn from their experimentation with new methods.

QUESTION: Good morning. Philip Schlussel.

You indicated that one of the great problems was morality. Who would you suggest could appoint or elect the arbiters of morality whose decisions would be respected here in this part of the world and the entire world and would have a decent enough following?

WENDELL WALLACH: I wouldn't mind if the people in this room selected those. I think there is a lot of wisdom in this room and there are a lot of people who perhaps have reached a stage where they are semi-retired or retired, but have gained a great deal of credibility in the society. And I do wish we would turn to them and start looking for who the good-faith brokers are in our society. It seems that being a good-faith broker isn't worth very much in modern America. If you write a book, it is important to be sensational. The more extreme your claims are, the more sensational you are. If you write a balanced book, it is pretty iffy whether anybody is going to read it.

So there is a need for, let's say, a shift in our social morality, perhaps to value virtue in a different way—and I don't mean the way virtue is being appropriated by some political parties in America—but to value virtue, to value wisdom, to recognize that we do need good-faith brokers, and to turn to those who don't necessarily have so much of a vested interest in getting benefits for themselves to perhaps help us at least get some direction in what should actually be embraced and what perhaps we should take a second look at.

QUESTION: Thank you. William Verdone.

I would like your impressions on the ongoing investigation and excitement toward biology and genomics and so forth, especially from the religious community, who may have another impression.

WENDELL WALLACH: This is a difficult one here in America, isn't it? I think we are recognizing that we have truly become two countries, with totally different philosophies of life, totally different ways of looking at things. We seem to be, again, pretty good at representing each other as just stupid. Isn't that how our political dialogue is proceeding?

But I also recognize that we are moving into an international world, and those who have more parochial, more community-based values are at a disadvantage. These technologies are going to be developed on the presumption that they are beneficial somewhere, and perhaps even if they aren't beneficial. I don't think there is an easy way out of this for us, but I think we need to understand a little better how each other thinks. That probably starts with trying to understand why others really think in these ways that I see as anti-scientific, irrational, and so on. I don't believe we will find ways of communicating with all of those who disagree with us, but I think we can perhaps find some more effective ways if we at least come to appreciate the intuitions that are moving them to express what they are expressing.

Does this mean we are going to stop tinkering with the genome because some people are offended and think that is a violation of a god-given principle? Probably not. But we can ameliorate some of this.

The stem cell debate was very fascinating. As I think most of you understand, it was largely about whether we were killing unborn fetuses by taking cells from blastocysts—fertilized eggs that were really just sitting in incubation tanks and would never be used. Some scientists recommended that we could use adult stem cells, which we all have in our bodies, rather than embryonic stem cells. The way those stem cells got labeled was unfortunate, but that is how they were labeled. Immediately the scientists said, "No, no, no, no, we can't do that. That would be much more complicated, and there is no way to get pluripotency"—the ability to get an adult stem cell to turn into many other kinds of tissue cells. Well, research has gone on, and there has been success on that.

Stem cell research is still moving along pretty slowly. There were great hopes, but even in the beginning, the better thinkers recognized that this was a 20- or 30-year path. We still don't know what the best ways of realizing cures to different medical conditions will be.

I am just pointing out this example because the scientists were wrong in this one, the scientists who rejected research on adult stem cells. Those of a religious persuasion had a case to be made. I am not saying that because I think we should allow religions to direct all of our public policy. That doesn't work in a liberal society, and we were founded very much on a society that would not let that happen. But I do think that we need to listen more closely to the intuitions of those that we disagree with.

JOANNE MYERS: Thank you for your wisdom, your words of caution, and for helping us understand the challenges ahead.
