Imagine that a soldier has a tiny computer device injected into their bloodstream that can be guided with a magnet to specific regions of the brain. With training, the soldier could then control weapon systems thousands of miles away using their thoughts alone. Embedding a similar type of computer in a soldier’s brain could suppress their fear and anxiety, allowing them to carry out combat missions more efficiently. Going one step further, a device equipped with an artificial intelligence system could directly control a soldier’s behavior by predicting which options they would choose in their current situation.
While these examples may sound like science fiction, the science to develop neurotechnologies like these is already in progress. Brain-computer interfaces, or BCIs, are technologies that decode brain signals and transmit them to an external device to carry out a desired action. Basically, a user simply thinks about what they want to do, and a computer does it for them.
BCIs are currently being tested in people with severe neuromuscular disorders to help them regain everyday functions such as communication and mobility. For example, patients can turn on a light switch by visualizing the action while a BCI decodes their brain signals and transmits them to the switch. Likewise, patients can focus on specific letters, words, or phrases on a computer screen that a BCI moves a cursor to select.
However, ethical considerations have not kept pace with the science. While ethicists have pressed for more ethical inquiry into neural modification in general, many practical questions about brain-computer interfaces have not been fully addressed. For example, do the benefits of BCIs outweigh the substantial risks of brain hacking, information theft, and behavioral control? Should BCIs be used to curb or enhance specific emotions? What effect would BCIs have on the moral agency, personal identity, and mental health of their users?
These questions are of great interest to us, a philosopher and a neurosurgeon who study the ethics and science of current and future BCI applications. Considering the ethics of using this technology before it is implemented could prevent its potential harms. We believe that responsible use of BCIs requires safeguarding people’s ability to function in a range of ways that are considered central to being human.
Expanding BCIs beyond the clinic
Researchers are exploring nonmedical brain-computer interface applications in many fields, including gaming, virtual reality, artistic performance, warfare, and air traffic control.
In 2018, the US military’s Defense Advanced Research Projects Agency launched a program to develop “a safe, portable neural interface system capable of reading from and writing to multiple points in the brain at once.” Its aim is to produce nonsurgical BCIs for able-bodied service members for national security applications by 2050. For example, a soldier in a special forces unit could use a BCI to send and receive thoughts with fellow soldiers and unit commanders, a form of direct three-way communication that would enable real-time updates and faster responses to threats.
To our knowledge, these projects have not opened a public discussion about the ethics of these technologies. While US military practitioners acknowledge that successfully implementing BCIs will “need to overcome negative public and social perceptions,” ethical guidelines are needed to better evaluate proposed neurotechnologies before they are used.
One way to approach the ethical questions raised by BCIs is utilitarian. Utilitarianism is an ethical theory that strives to maximize the happiness and well-being of everyone affected by an action or policy.
Enhancing soldiers might create the greatest good by improving a nation’s warfighting abilities, protecting military assets by keeping soldiers at a distance, and maintaining military readiness. Utilitarian defenders of neuroenhancement argue that emerging technologies like BCIs are morally equivalent to other widely accepted forms of brain enhancement. For example, stimulants like caffeine can improve the brain’s processing speed and may improve memory.
However, some worry that utilitarian approaches to BCIs have an ethical blind spot. In contrast to medical applications, which are designed to help patients, military applications are designed to help a nation win wars. In the process, BCIs may ride roughshod over individual rights, such as the right to be mentally and emotionally healthy.
For example, soldiers operating drone weaponry in remote warfare today report higher levels of depression, post-traumatic stress disorder, and broken marriages compared with soldiers on the ground. Of course, soldiers routinely elect to sacrifice for the greater good. But if neuroenhancement becomes a job requirement, it could raise distinct concerns about coercion.
Another approach to BCI ethics, neurorights, prioritizes certain ethical values even when doing so does not maximize overall welfare.
Proponents of neurorights champion individuals’ rights to cognitive liberty, mental privacy, mental integrity, and psychological continuity. A right to cognitive liberty might bar unreasonable interference with a person’s mental state. A right to mental privacy might require ensuring a protected mental space, while a right to mental integrity would prohibit specific harms to a person’s mental states. Finally, a right to psychological continuity might protect a person’s ability to maintain a coherent sense of self over time.
BCIs could interfere with neurorights in a variety of ways. For example, if a BCI alters how the world appears to its user, the user might not be able to distinguish their own thoughts and emotions from the altered versions. This may violate neurorights such as mental privacy and mental integrity.
Yet soldiers already forfeit similar rights. For example, the US military is allowed to restrict soldiers’ freedom of speech and free exercise of religion in ways that are not typically applied to the general public. Would infringing neurorights be any different?
A human capabilities approach insists that safeguarding certain human capabilities is crucial to protecting human dignity. While neurorights focus on an individual’s capacity to think, a capability view considers a broader range of what people can do and be: being emotionally and physically healthy, moving freely from place to place, relating to others and to nature, exercising the senses and imagination, feeling and expressing emotions, playing and recreating, regulating one’s immediate environment, and so on.
We find a capability approach compelling because it yields a more robust picture of humanness and respect for human dignity. Drawing on this view, we have argued that proposed BCI applications must reasonably protect all of a user’s central capabilities at a minimal threshold. BCIs designed to enhance capabilities beyond average human capacities would need to be deployed in ways that serve the user’s goals, not just other people’s.
For example, a bidirectional BCI that not only extracts and processes brain signals but also delivers somatosensory feedback to the user, such as sensations of pressure or temperature, would pose unreasonable risks if it compromised the user’s ability to trust their own senses. Likewise, any technology, including a BCI, that controls a user’s movements would infringe on their dignity if it did not allow the user to override it.
A limitation of a capability view is that it can be difficult to define what counts as a threshold capability. The view also does not address which new capabilities are worth pursuing. Yet neuroenhancement could alter what is considered a standard threshold, and could eventually introduce entirely new human capabilities. Addressing this requires supplementing a capability approach with a fuller ethical analysis designed to answer these questions.
By Nancy S. Jecker, Professor of Bioethics and Humanities at the University of Washington School of Medicine, and Andrew Ko, Assistant Professor of Neurosurgery at the University of Washington. This article is republished from The Conversation under a Creative Commons license. Read the original article.