Mind Control to Major Tom: First State Regulates Use of Neurotechnologies

Oct 29, 2021

One of the last frontiers of science remains the human mind—but not for much longer. Scientists can already manipulate memories and emotions such as fear or anger, at the switch of a nanolaser, using a technique called optogenetics. Rafael Yuste, a biology professor at Columbia University, said that scientists "have already succeeded in implanting in the brain of mice images of things that they hadn’t actually seen which affected their behavior." Coupled with neuro-prostheses, neural probes, intra-neural tissue implants, and other developments in neurological research, the ability to control and manipulate the brain remotely is no longer science fiction. Where imaging previously permitted scientists only to observe disparate kinds of activity within the brain, functions of the brain are now being mapped and altered much as the human genome was, with similar attempts to intervene in and manipulate neural function for many different purposes.

Last week, Chile became the first country to legislate on neurotechnology that can manipulate the mind, focusing on the rights to personal identity, free will, and "mental privacy." In a statement, the Chamber of Deputies emphasized that "scientific and technological development must be at the service of people and carried out with respect for life and physical and mental integrity." This State regulatory response, hopefully the first of many, addresses the protection of citizens’ privacy and human rights. What the legislation does not address is the use of these technologies outside of peacetime, namely during armed conflict or war.

These technologies can be, and are actively being, exploited by militaries to manipulate human behaviour and memory, or to induce fear or anger, to give just a couple of examples. Recent advances involving lasers increasingly fall outside a ‘traditional’ weapons sense, in that their effects are neurological and reversible. Optogenetics provides the opportunity to influence the brain, at specific times, to exhibit specific behaviours. Increased understanding of how memory, emotion, and cognition work will almost certainly result in the manipulation of these functions as well. In armed conflict, the impact of these neural interventions could be magnified by other emerging technologies, including motes, which are already in use, drones, and other automated systems.

The laws of war prohibit biological and chemical weapons, along with many other types of weapons and warfare, but intervention in the human brain, particularly in ways that are reversible, was not foreseen by the drafters of the laws of war decades ago. It is a real and threatening application of technology that needs further consideration, and potentially a more thoughtful application of existing law. The legal requirement to review new or modified weapons should cover not only traditional weapons but also any use of science that is weaponised—to protect the "rights to personal identity," free will, and "mental privacy," not only in times of peace, but even more so in times of war.

This blog was originally published on October 19, 2021 by Cambridge University Press. Read more about the impact of new technologies on modern warfare in Kobi Leins' book New War Technologies and International Law, coming out in November 2021.

Kobi Leins is a senior research fellow in digital ethics in the Faculty of Engineering and IT at the University of Melbourne and a non-resident fellow of the United Nations Institute for Disarmament Research.
