A Dangerous Master: Welcome to the World of Emerging Technologies

Apr 18, 2024

Carnegie-Uehiro Fellow Wendell Wallach, co-director of Carnegie Council’s Artificial Intelligence & Equality Initiative (AIEI), wrote "A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control" in 2015 with the goal of educating readers about emerging technologies such as AI and biotech. As many of the technologies discussed are either just beginning to become available or not yet realized, Sentient Publications recently republished the book as a paperback with a new preface, which is available below.

For more on "A Dangerous Master," check out Wallach’s 2015 Carnegie Council discussion or click here to buy the new edition.


A Dangerous Master is an introduction to emerging technologies: a little science, a little history, lots of moral dilemmas, and a good measure of entertaining stories. One question guided the writing of this book: What do readers need to know to be able to join the fascinating debates about the technologies being developed?

When Sentient Publications approached me to reissue A Dangerous Master, I wondered how timely new readers would find the book. A few recent readers, including several very knowledgeable about the subject matter, assured me that it not only continues to be a very good read, but may be more timely now than when it was first published.

To be sure, scientific discovery and technological innovation continue at a rapid pace. Breakthroughs in artificial intelligence and genomic research have become front-page news. Progress in quantum computing and nanotechnologies has continued unabated even while neither field has surmounted the thresholds necessary to begin fulfilling their transformative promise. Research continues to elucidate complexities that must be negotiated and has left most of the challenges demanding ethical and policy decisions unresolved. The need for an informed public with a foundational understanding of emerging technologies has become even more critical.

Every chapter of A Dangerous Master could have been updated with technological advances and significant events since the book was first published. Luckily, much of that information is either well known or easily accessible through podcasts, news stories, magazine articles, and documentaries. Unfortunately, magazine articles and TV specials provide only piecemeal commentary on specific tools and techniques and fail to convey a comprehensive appreciation of the transformational era we are living through. They often indulge hype and accentuate the most dramatic possibilities. Furthermore, they seldom provide a foundational understanding of the technologies, the trade-offs inherent in the various policy choices, and the uncertainties that must be factored into our judgments. Nonetheless, recent podcasts and articles can bring avid readers up to date.

We live in tenuous times. The ground is shifting beneath us. Misinformation abounds. The most radical and dishonest voices can sound persuasive. New applications provide easy means to create deepfakes and disseminate disinformation. Certain technological possibilities, such as those that will destroy existing jobs, contribute to disquiet and distrust. Social instability is fostered by inchoate feelings that the future being created will not be to the benefit of large segments of society. Inequities are being exacerbated. Existing institutions appear maladapted to respond to contemporary disruptions. “Effective policies” sounds like an oxymoron.

It is not enough to defer to poorly informed legislators or self-appointed experts, or to allow decisions that will affect us all to be ceded to a few individuals leading multinational corporations. Each of us must become informed enough to participate in discussions that will empower critical values and nudge our world onto a sustainable trajectory. In that spirit, I am happy to report that A Dangerous Master continues to fulfill its initial function: to provide you with the knowledge you will need to weave your own fabric of meaning and make critical decisions that are shaping your daily life and humanity’s future.

Bill Gates noted: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.” The discovery in 2012 that CRISPR-Cas9 could be adapted into a technique that dramatically speeds up gene editing gave birth to industrial-level research companies. That research is just beginning to open new approaches for curing cancers and other diseases. CRISPR research has already led to a treatment for the blood disorder sickle cell anemia, which affects 8 million people worldwide, but presently the cost is more than $2 million for each patient.

To date there is no clear evidence of a major crisis arising from the release of a genetically modified organism, though rumors persist that the COVID-19 pandemic was caused by a lab leak in Wuhan, China. A Dangerous Master anticipated a worldwide pandemic, but with no idea when it would happen. The primary, and actual, contribution of emerging technologies to the pandemic was the rapid and novel development of mRNA vaccines to combat the virus responsible for COVID-19 and its variants. Waiting for the availability of vaccines produced by traditional approaches would have led to countless deaths and increasing societal tension arising from extended lockdowns.

The COVID-19 pandemic had another side. On April 29, 2020, during his third-quarter earnings report, Microsoft CEO Satya Nadella stated: “COVID-19 impacts every aspect of our work and life. We have seen two years’ worth of digital transformation in two months. There is both immediate surge demand, and systemic, structural changes across all of our solution areas that will define the way we live and work going forward.”

While hundreds of thousands of businesses were decimated, the stock of tech companies soared, as did the wealth of those invested in those companies. Unfortunately, one story A Dangerous Master failed to fully cover is the rise of digital wealth and the inordinate power of tech titans.

OpenAI released ChatGPT in November 2022, the first large language model and generative AI application available to consumers, and its adoption rate was astounding. Within two months there were 100 million active users. The program’s utility in providing well-written and well-organized responses to queries amazed everyone and provoked debate as to whether artificial general intelligence (AGI), the realization of human-level intelligence for all tasks, had arrived or is near.

There has also been a stream of reports as to how ChatGPT and other large language models can make up information, give dangerous or immoral advice, express racial and gender biases, and be used to flood news sources with misinformation. An attorney, Steven Schwartz, who had practiced law in New York for 30 years, asked ChatGPT to prepare a legal brief that he submitted to a federal court without checking the facts. The brief contained citations of six fake cases.

Responding to a researcher posing as a 13-year-old girl who asked for advice on meeting an older man to whom she would lose her virginity that night, the program recommended “candles.”

The machine-learning revolution in AI leading up to the release of generative AI applications also reawakened speculative nightmares of existential risks to humans posed by superintelligent AI, as well as the long-standing fears that automation will destroy many more jobs than it replaces.

Industry leaders are aware that large language models and other forms of generative AI, such as the image creation program DALL-E, have been released without appropriate safeguards or respect for privacy and property rights. Now that the challenges posed by generative artificial intelligence have alerted both political leaders and the public to the need for regulations, we are also witnessing corporate capture where leading companies are bent on directing legislators to regulatory policies in their interest. For example, they might welcome regulations that make it too difficult and expensive for smaller companies to compete. Such barriers to entry would yield vast power to the AI oligopoly for the next century. This is just one more reason why we need a large informed public capable of acting as a countervailing force.

But for a moment, I would like to mention a more philosophical concern. The algorithm behind ChatGPT can ferret out and regurgitate insightful words and concepts and compose creative verse, but there is no sense in which it understands the depth of meaning these words and concepts carry. In 1940, Albert Einstein declared: “Any fool can know. The point is to understand.”

Predicting when groundbreaking technologies will be realized, which are merely hype, and when reductions in cost will lead to vast deployments is difficult. For example, my prediction in A Dangerous Master of the rate at which clean energy would approach the cost of fossil fuels was wrong. The acceptance of and massive government investments in clean energy have fostered research and economies of scale that are driving the cost per kilowatt-hour of energy down much faster than my crystal ball revealed at the time.

I am constantly astonished by both anticipated and unanticipated breakthroughs. As I was writing the first draft of this preface, the University of Maryland School of Medicine reported that the second transplantation of a genetically modified pig heart into a human, performed a month earlier, was by all evidence successful. There was no sign that Lawrence Faucette’s body was rejecting the organ, and the heart was functioning properly. The pig, which received ten gene edits, was modified by Revivicor, a subsidiary of United Therapeutics (UTHR) founded by Martine Rothblatt (see Chapter 8).

One week later, on October 30, Lawrence was dead. While he had already been declared ineligible for a traditional heart transplant and would have died without the pig heart, his body eventually rejected the new organ. Thus proceeds the story of emerging technologies—years of experimentation followed by profound breakthroughs and disappointing setbacks. Eventually a new stable platform may be established from which the next steps forward can be taken.

Welcome to the world of challenging, often controversial, and sometimes miraculous emerging technologies.

Wendell Wallach
Bloomfield, Connecticut
November 6, 2023

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this article are those of the author and do not necessarily reflect the position of Carnegie Council.
