Long-termism: An Ethical Trojan Horse

Sep 29, 2022

Recently the philosopher William MacAskill, with his book What We Owe The Future, has been popularizing the idea that the fate of humanity should be our top moral priority. His core proposition is that today's 8 billion humans are vastly outweighed in importance by the hundreds of billions of humans who could live in future generations if we can avoid wiping out humanity in the near term.

MacAskill's argument is known by the slogan "longtermism" (often written as "long-termism"), and it has already been sharply criticized. For example, columnist Christine Emba has written in The Washington Post: "It's compelling at first blush, but as a value system, its practical implications are worrisome." In practice, she explains, it implies seeing "preventing existential threats to humanity as the most valuable philanthropic cause"—which means we should invest far more in addressing risks that threaten humanity's very long-term existence.

As Emba says, this can seem impossible to disagree with. Think of climate change: Most of us would now agree that for decades we have been underestimating the threat of environmental collapse, and that with hindsight we should have been more willing to sacrifice some of our living standards to speed the transition away from burning fossil fuels.

The difficulty comes when you consider trade-offs. Exactly how much should be sacrificed today to increase the chances of future generations surviving various perceived threats? How speculative can a future threat be and still justify current sacrifices? Who should sacrifice, and when? And who gets to decide who sacrifices, when they do so, and what they will have to give up?

To take an extreme example, consider a hypothetical proposal to devote a quarter of all human resources to minimizing the risk of a large asteroid or comet striking Earth. Imagine governments mandating universities to focus primarily on asteroid-tracking technology, and requisitioning factories to make parts for missiles to divert the path of oncoming asteroids.

Such a proposal would be absurd, because an asteroid strike is unlikely to threaten humanity any time soon, perhaps for millions of years. But the threat that a strike could happen in the near term is realistic enough that we should clearly devote some fraction of human resources to addressing it. The Hollywood blockbuster Don't Look Up was intended as an allegory for climate change, but it also amusingly explores how ill-prepared humanity might be if the asteroid threat were to suddenly materialize.

How much time and capital exactly should be directed at asteroid or comet defense? That question does not have an obvious answer, and asteroid strikes are only one of many speculative long-term threats—a significant percentage of which are linked to human technologies. We have already developed one technology that has the power to make the planet uninhabitable—nuclear weapons—and we invest significant time and political capital in efforts to make sure this technology is never used.

We are now working on other technologies with comparably destructive potential. How much should we invest in addressing the risks of pandemics caused by human-engineered pathogens, for example? Or of artificial intelligence-powered systems that become smarter than humans at performing human-like tasks, but are insufficiently aligned with core human values?

Science fiction is useful for identifying potential threats, but it can take us only so far in considering real-time trade-offs. Traditionally, writers of science fiction have wanted to caution us against hubristically choosing to pursue technologies without understanding their undesirable impacts or nefarious uses. We heed this advice when, for example, we regulate what kind of gene-editing research can be carried out to prevent new and lethal pathogens or bioweapons from being created.

But that is only one approach. Another is deliberately researching lethal pathogens in biolabs in an attempt to better prepare for imaginable future pandemics and develop treatments in advance. Which approach seems better will depend, in part, on whether we think the kind of research that could create deadly pathogens also offers scope for insights that might lead to other breakthroughs.

Advancing gene-editing technology could potentially enable the engineering of future generations of humans whose capabilities surpass those of this generation. If we give high moral weight to these future generations, we might feel obliged to pursue the technology while doing our best to mitigate the risks. By contrast, if we prioritize minimizing risks to the wellbeing of people alive today, we might prefer to more strictly limit what research can be pursued.

Creating new pathogens to study how to treat them poses the risk that the pathogen could "escape" the biolab facility before a treatment is developed. Biosecurity expert Filippa Lentzos points out that we do not know how big a risk we are running: "At present, there is no requirement to report these facilities internationally, and no international entity is mandated to collect information on the safety and security measures they have in place, or to provide oversight at a global level."

Yet there is another more fundamental problem: Resources we devote to preparedness for future pandemics cannot also be spent on treating today's preventable diseases and chronic illnesses. There will always be an inherent tension between the certain gains of making life better for people alive now, and the uncertain gains of maximizing the speculative benefits and minimizing the conjectural risks that face potential future generations.

We have no doubt that philosophers like William MacAskill have good intentions. Many of our colleagues and friends share his concerns, and some are leading voices on the need to prioritize long-term existential risks. However, we worry that these legitimate concerns can easily be distorted and conflated with personal desires, goals, messianic convictions, and the promotion of deeply embedded political agendas and corporate interests.

To see why, imagine you are a billionaire who has invested in developing a technology that could do various forms of harm in the short term, and even wipe out humanity in the long term. You worry about the public and governments focusing on the short-term harms and damaging your profits by regulating how the technology can be developed and used.

To make this less likely, you might decide to invest in boosting the profile of thinkers who focus instead on long-term existential risks, while making hyperbolic claims about long-term benefits. This strategy is particularly evident in discussions of artificial intelligence's risks and benefits. Developers and investors hope that if the public is persuaded that the really "big" threat is being addressed, it will be sanguine about more immediate problems and shortcomings. They hope to create the impression that harms being done today are worth enduring because they will be far outweighed by the benefits promised for tomorrow, when the technology matures. Such a strategy masks the possibility that the longer-term risks will far outweigh the short-term benefits of specific applications.

It is no coincidence that institutes working, for example, to anticipate the existential risks of artificial general intelligence get much of their funding from the very same billionaires who are enthusiastically pursuing the development of cutting-edge AI systems and applications. Meanwhile, it is much harder—if not impossible—to get funding for research on those cutting-edge applications that are being applied today in ways that boost profits but harm society.

The well-intentioned philosophy of long-termism, then, risks becoming a Trojan horse for the vested interests of a select few. Therefore we were surprised to see this philosophical position run like a red thread through "Our Common Agenda," the new and far-reaching manifesto of United Nations Secretary-General António Guterres.

Guterres writes: "[N]ow is the time to think for the long term […] our dominant policy and economic incentives remain weighted heavily in favor of the short term and status quo, prioritizing immediate gains at the expense of longer-term human and planetary well-being." He points out that current and future generations will have to live with the consequences of our action or inaction and posits that "humanity faces a series of long-term challenges that evolve over multiple human life spans."

At first glance, as we have seen, sentiments like these sound like the kind of thing nobody could disagree with. However, we believe that this rhetoric is detrimental. It risks giving credence to agendas that serve the short-term interests of political, economic, and technological elites in pushing the development of technologies that have clearly demonstrated the potential to exacerbate inequalities and harm the wider public interest.

We are not saying that this is the intention. But we are concerned about what the harmless-sounding words could be used to justify, especially when Guterres proposes to establish a "Futures Laboratory" to "support States, subnational authorities and others to build capacity and exchange good practices to enhance long-termism, forward action and adaptability." (emphasis added)

If the basic idea of long-termism—giving future generations the same moral weight as our own—seems superficially uncontroversial, it needs to be seen in a longer-term philosophical context. Long-termism is a form of utilitarianism or consequentialism, the school of thought originally developed by Jeremy Bentham and John Stuart Mill.

The utilitarian premise that we should do whatever does the most good for the most people also sounds like common sense on the surface, but it has many well-understood problems. These have been pointed out for more than two centuries by philosophers from opposing schools: deontologists, who believe that moral rules and duties can take precedence over consequentialist considerations, and virtue theorists, who assert that ethics is primarily about developing character. In other words, long-termism can be viewed as a particular position in the time-honored debate about inter-generational ethics.

The push to popularize long-termism is not an attempt to settle these long-standing intellectual debates, but to make an end run around them. Through attractive sloganeering, it attempts to establish consequentialist moral decision-making that prioritizes the welfare of future generations as the dominant ethical theory for our times.

Long-termism grew out of the effective altruism movement, inspired by the utilitarian philosopher Peter Singer. However, it represents a fundamental shift from Singer's original focus on humanitarian causes. Singer argued that it is immoral for us to spend on luxuries when the money could instead be invested in humanitarian causes to alleviate the suffering of the disadvantaged.

By analogy, long-termist logic suggests that it is immoral for us to spend on alleviating suffering in the here and now when the money could instead be invested in shaping the future. Those who dispute this logic are often accused of "present bias."

But how much should we sacrifice today for the lives of hypothetical future beings who might not even live physically, but in the metaverse? As the scholar Emile Torres puts it, "Critics might well charge that focusing on digital people in the far future can only divert attention away from the real-world problems affecting actual human beings." The core difficulty with long-termism is that such ethically dubious implications are easy to hide behind the shield of admirable sentiments.

When asked about his involvement in writing "Our Common Agenda," for example, William MacAskill responded with points that nobody could disagree with, such as that "many of the challenges that seem to be most important from the perspective of positively steering the future involve global public goods."

MacAskill also suggests: "Perhaps there are even certain areas of AI that we actually want to like regulate on a global scale or slow down at least on a global scale because we think that they pose more risks and dangers and are unlike most uses of AI that will be extremely beneficial. And the UN has that convening power, it has that soft power, and so it could potentially help."

The international governance of AI is a goal shared by many, including Carnegie Council for Ethics in International Affairs. But, as the saying goes, the devil is in the details. How do we distinguish the areas of AI that pose risks and dangers from the "most uses of AI" that MacAskill believes would be extremely beneficial? Long-termist sentiments could deflect attention away from the case for near-term governance of novel applications.

"Our Common Agenda" is up for discussion, culminating with a "Future Summit" in 2023. Clearly, the aim of preventing technologies from destroying humanity is a good one. It may well need more funding and attention. But from which other social goods and core human rights should we shift funding and attention?

Rather than slogans, we need deep reflection on the tensions and trade-offs between long-term and nearer-term goals. But if a slogan is required to sum up our disquiet with "Our Common Agenda" and its implied embrace of long-termism, one need look no further than the title of another UN report from 2019: "The Future is Now."

Indeed, from our perspective, acts of kindness and care for those alive today are prerequisites for any viable future.

Anja Kaspersen is a senior fellow and Wendell Wallach is a Carnegie-Uehiro Fellow at Carnegie Council for Ethics in International Affairs, where they co-direct the Artificial Intelligence & Equality Initiative (AIEI).
