Neuroscience has almost surely grown faster than any other interdisciplinary area over the past decade. The Society for Neuroscience is host to one of the biggest science meetings in the world, drawing about 40,000 attendees from disciplines including neurology, psychology, computer science, radiology, and psychiatry, as well as my own field of bioethics.
My fascination with the ethics of neuroscience research is rooted in two distinct experiences. First, I wrote a book called Undue Risk: Secret State Experiments on Humans (W.H. Freeman, 2000), about the history and ethics of human experiments conducted for national-security purposes. My work on that book — in addition to my role as a staff member for a presidential advisory commission on radiation experiments on humans — sensitized me to the complex relationship between science, ethics, and national-defense needs. Understanding that American science is essential for our military superiority, every presidential administration since World War II has provided federal largess to support research, creating a relationship between academe and government that has served the country well.
Second, as I followed developments in neuroscience through the popular press and scientific publications, and in my work as a consultant for science organizations, I noticed that many of the most exciting experiments were supported by national-security agencies, such as the Defense Advanced Research Projects Agency, known as Darpa. Yet press coverage of neuroscience experiments usually mentions the source of funds only in passing.
I wondered what security agencies’ financial support might say about their interest in the long-term contributions of neuroscience to national security. Though that question seemed to me to be the 800-pound gorilla in the neuroscience lab, to my amazement, no one else — including the neuroscientists — appeared to be asking it in any systematic way.
In one important sense, that is perfectly understandable. Scientists focus on their particular research aims, not on the long-range interests of financial supporters. What seems to an investigator to be a very limited research question can be seen by a security agency as part of a larger pattern. And of course, even scientific advances that do not stem from military-sponsored research can be adapted for security purposes later.
Not all the neuroscientists I spoke with were enthusiastic about discussing such issues on the record, to put it mildly. Their reluctance only served to confirm my sense that those matters were of more than passing interest, and led to my new book, Mind Wars: Brain Research and National Defense (Dana Press, 2006).
Fortunately for my project, not all scientists or former agency employees were unwilling to talk with me, and much information is a matter of public record. I came to appreciate the way that Darpa in particular does business, for it is a science agency, not a spy agency, and the vast majority of its work is done in concert with scholars. Thus although many of the studies that raise interesting ethical and social questions are sponsored by Darpa, that does not imply that the agency should not be supporting them, nor that the research should not be done.
Of course scientists in any field are understandably reluctant to make comments that could jeopardize future financial support for their work, but there is a special sensitivity — in some cases, almost paranoia — about the suggestion that scientific research is leading to “mind reading” or “mind control.” That sensitivity is partly left over from the early days of the cold war, when U.S. government officials suspected that treasonous statements by prisoners of war in North Korea were the result of “brainwashing.” Twenty years later, we learned that the Central Intelligence Agency and the U.S. Army had themselves engaged in mental-manipulation experiments that included the use of hallucinogens like LSD.
People “read” each other’s minds all the time, sometimes unconsciously and relying on various cues such as body language, and our minds are “controlled” in countless ways, from natural stimuli like odors to pop-up Web ads. But most of us get nervous when we imagine that some distant authority could have access to what we like to think of as private thoughts, or that some deliberate and fairly precise means can be used to alter our cognition or behavior in accord with someone else’s strategic purpose.
I don’t know if any of the contemporary research projects that I discuss in my new book qualify as mind reading or mind control, but some of them seem pretty close. Certain brain-scanning techniques, especially functional magnetic resonance imaging (fMRI), have stimulated a huge amount of research attempting to correlate neural activity with specific tasks or experiences.
In one famous and controversial study, negative automatic responses by white research subjects to photographs of black faces were correlated with activity in the amygdala, a brain region that processes emotional responses to stimuli. Or take the example of “prisoner’s dilemma” experiments, in which two subjects each fare better when they cooperate with each other than when they do not. When the subjects do well, neurotransmitters activate pleasure centers in the brain. Some neuroscientists claim that fMRI can already show when subjects are thinking of a certain number, when they are lying, or what their sexual orientation is — and that the technique will make even more refined and precise analyses possible in the years ahead.
Other studies focus on replacing old-fashioned lie detectors with systems based on neuroscience. The hope is that the new techniques would not only be more reliable, but that they could replace torture and other physically aggressive means of interrogating terror suspects and enemy operatives.
It’s not hard to see additional security implications of such technical capabilities. For instance, a spy agency could measure the neurotransmitter secretions of candidates for special missions, to see how they react to stress. Military personnel in information-rich environments, like cockpits, could have their brain functions monitored for information overload, and officers behind the front lines could modify the flow of data accordingly, using devices now being developed to provide “real time” remote brain imaging.
Other direct interventions to enhance soldiers’ capabilities could come in many forms, including new generations of neuropharmaceuticals, implants, and neural stimulation. New antisleep agents like modafinil (which, under the brand name Provigil, readers’ students may already have discovered) are replacing old-fashioned amphetamines among fighter pilots as well as globe-trotting business executives. Darpa’s “peak soldier performance” program aims to improve metabolism on demand so that a soldier could operate at a high level for three to five days without sleep and without food, except perhaps high-nutrition pills.
Darpa is also interested in increasing the “bandwidth” of soldiers’ brains. One idea is to develop something called a “brain prosthesis,” a chip that — if it could be made to work — would help restore mental functioning in people who have epilepsy or have had strokes. But experts disagree about whether such a device, intended to treat a medical condition, could also improve normal mental functioning.
Or perhaps extra copies of genes that code for certain neural receptor sites could be introduced in the brain, to improve learning skills; that has been done in mice, in the lab of Joe Z. Tsien, of Boston University. Electrical stimulation has been used with some success as an adjunct to standard rehabilitation techniques for stroke victims — could it improve cognitive functions in healthy individuals?
Intelligence and endurance are not the only traits that make a good soldier. Another, of course, is the ability to manage fear. In an interesting experiment, Gleb Shumyatsky’s research team at Rutgers University, in Piscataway, N.J., found that mice bred to lack the gene stathmin did not exhibit normal fear behavior, such as freezing in place, as often as normal mice did when exposed to things like a mild shock. Stathmin is expressed in the amygdala and is associated with both innate and learned fear. The mice without stathmin froze less often because both their innate fear responses and their capacity to learn fear were impaired.
It is unlikely that a particular gene in humans corresponds so precisely to fear, given differences in the way mouse and human genes are expressed. But given past hype about, say, the so-called gay gene, it is easy to imagine an overly enthusiastic official proposing to screen recruits for the “fear gene.”
Where there is fear, there is often long-term trauma. Trauma victims who had been given the beta-blocker propranolol — which is normally used to treat heart disease but which also blocks the action of stress hormones that help consolidate long-term memories with emotion — scored lower on a scale measuring post-traumatic stress disorder than did members of a control group after a month of psychological counseling. The difference was not statistically significant, but another result is more important: Three months later, none of the beta-blocker recipients had elevated physiological responses when asked to recall their traumatic experiences, while 40 percent of the control-group members did.
Those results give hope to sufferers of post-traumatic stress disorder. They also raise the question of whether the drug could be given prophylactically, before a person enters what could be a traumatizing situation. How would we feel about preventing the disorder in young soldiers going into battle — preventing both a lifetime of harrowing memories and the soldiers’ capacity to connect horrific experiences with negative emotions? Do we really want guilt-free soldiers?
The prospects ahead of us, and their ethical and legal implications, may already be making the hair rise on the back of readers’ necks. How are we to ensure that interventions like those to manage guilt and fear would be confined to military operations against truly dangerous adversaries and not more widely adopted, perhaps by civil authorities — or criminals?
The importance and difficulty of regulation become even more apparent when we consider that many of the technologies involved would be “dual use,” not only useful tools in military situations but also promising advances in health care. The same sort of device that would allow an officer to see if a pilot was receiving too much information, say, could permit a nurse in a doctor’s office to check up on the welfare of a brain-injured patient at home. There are also commercial possibilities, of course. For instance, businesses are already intrigued by the possibilities of using brain-imaging techniques to conduct market research.
Much of the history of bioethics might be read as a 40-year conversation about the prospects for changing human nature through startling developments in the life sciences. Bioethicists have largely played down such concerns, noting the extent to which we already deliberately change ourselves in all sorts of low-tech ways, like using sleep medication or taking French lessons.
However, an alternative view has recently been getting more attention. Its supporters — including Leon R. Kass, professor in the Committee on Social Thought at the University of Chicago and former chairman of the President’s Council on Bioethics — contend that practices like new reproductive technologies, while attractive and seemingly benign, have profound but unpredictable societal implications. The debate about what types of enhancement are permissible, given the risks, should be expanded to include the sorts of interventions I have described, especially given the clout of national-security funds and goals.
The defense implications of modern neuroscience also raise specific policy questions about civil liberties, regulation, and safety. We’re familiar with the role of atomic scientists in the control of nuclear weapons, and more recently biologists have become key players in planning for defense against bioterrorism. The same is not yet true of neuroscientists, partly because the idea that neuroscience could be involved in national security is only now becoming clear, and partly because neuroscience is a complex interdisciplinary field whose practitioners work in separate silos. But the day is rapidly approaching when we will have to consider those issues in a much more systematic way.
One model is the National Science Advisory Board for Biosecurity, established in 2004 within the National Institutes of Health. The board advises all cabinet departments, including the Department of Defense. Nongovernmental experts on the board supplement the expertise in federal agencies on the ways that new developments in biological research could be misused to threaten public health and national security.
A National Science Advisory Board for Neurosecurity could deal with analogous problems and bring together neuroscience disciplines that have not yet talked to each other about appropriate policies for brain research and national defense. It, too, could be administered by the NIH and provide advice to the Department of Defense, the Department of Homeland Security, and other relevant agencies.
Finally, programs in neuroscience are springing up at colleges and universities around the country. The programs should include discussions of science policy, such as: How do the sources of research funds affect the direction of science and social change? Which uses of brain research are acceptable, and which are not? And what limits should society, perhaps acting through scientific associations, place on the acceptable applications of neuroscience?
Whatever the future holds for neuroscience, it would be naïve to suppose that national-security organizations are not monitoring developments in that field as they do in any other. It is time to start a reasoned public conversation about the role of brain research in national defense.