Mind Control to Major Tom: First State Regulates Use of Neurotechnologies

Oct 29, 2021

One of the last frontiers of science remains the human mind—but not for much longer. Scientists can already manipulate memories and emotions such as fear or anger, at the flick of a light switch, using a technique called optogenetics. Rafael Yuste, a biology professor at Columbia University, said that scientists "have already succeeded in implanting in the brain of mice images of things that they hadn't actually seen which affected their behavior." Coupled with neuro-prostheses, neural probes, intra-neural tissue implants, and other developments in neurological research, the ability to control and manipulate the brain remotely is no longer science fiction. Where imaging previously allowed scientists only to observe disparate kinds of activity within the brain, functions of the brain are now being mapped and altered much as the human genome was, with similar attempts to intervene in and manipulate neural function for many different purposes.

Last week, Chile became the first country to legislate on neurotechnologies that can manipulate the mind, enshrining rights to personal identity, free will, and "mental privacy." "Scientific and technological development must be at the service of people and carried out with respect for life and physical and mental integrity," the Chamber of Deputies said in a statement. This State regulatory response, hopefully the first of many, addresses the protection of citizens' privacy and human rights. What the legislation does not address is the use of these technologies outside the context of peacetime—namely, during armed conflict or war.

These technologies can be, and are actively being, exploited by the military to manipulate human behaviour and memory, or to induce fear or anger, to give just a couple of examples. Recent advances involving lasers increasingly depart from a 'traditional' weapons sense, in that their effects are neurological and reversible. Optogenetics provides the opportunity to influence the brain, at specific times, to exhibit specific behaviours. Increased understanding of how memory, emotion, and cognition work will also almost certainly result in the manipulation of these functions. The impact of these neural interventions could be magnified in armed conflict by other emerging technologies, including motes (which are already in use), drones, and other automated systems.

The laws of war prohibit biological and chemical weapons, along with many other types of weapons and warfare, but the regulation of intervention in the human brain (particularly in ways that are reversible) was not foreseen by the drafters of the laws of war decades ago. It is a real and threatening application of technology that needs further consideration—and potentially a more thoughtful application of the existing law. The legal requirement to review new or modified weapons should cover not only traditional weapons but any weaponised use of science—to protect the rights to personal identity, free will, and "mental privacy," not only in times of peace, but even more so in times of war.

This blog was originally published on October 19, 2021 by Cambridge University Press. Read more about the impact of new technologies on modern warfare in Kobi Leins' book New War Technologies and International Law, coming out November 2021.

Kobi Leins is a senior research fellow in digital ethics in the Faculty of Engineering and IT at the University of Melbourne and a non-resident fellow of the United Nations Institute for Disarmament Research.
