The complexity of the human brain is as fascinating as it is mystifying. Increasing our understanding of its ability to synthesise information, respond quickly to stimuli, and remodel itself could unlock many opportunities to benefit society.
It is no surprise, then, that a recent UNESCO report has shown research and patents relating to neurotechnology have increased twentyfold in two decades. When we combine this insight with our latest horizon scan, which highlights neurotechnologies as an area where there is potential for developments to spark ethical concern, it becomes pertinent for us to reflect upon what has changed since the publication of our novel neurotechnologies report in 2013.
At that time, we were particularly interested in how neurotechnologies were being used to intervene in the human brain in both clinical practice and non-medical settings. We found inconsistency and fragmentation in the regulatory landscape, which we warned could present issues in assuring safe use.
Therapeutic vs. non-therapeutic
Ten years on, the adoption of neurotechnologies outside traditional clinical environments has expanded, and the purposes for which they are used increasingly traverse both the therapeutic and the non-therapeutic. Indeed, changes in the way healthcare is delivered, and in what we understand 'health' to be, might merit revisiting whether a clear ethical distinction between therapeutic and non-therapeutic use can still be drawn.
Should we understand ‘therapeutic use’ as being limited to the treatment of diagnosable illness and injury, or should it be expanded to uses to address inequities, for example? New or refined answers to old ethical questions may be needed for policymakers to effectively oversee the neurotechnological space.
Another related question might be whether therapeutic uses of neurotechnology should be limited to treatment of neurological or psychiatric disorders. For example, if data collected by a brain-computer interface offers insight into the functioning of other bodily systems, should we expand its use?
When novelty runs out
Another issue that has come into focus in recent years is the consequence of technology becoming obsolete. For example, issues have arisen with implantable medical devices that patients rely upon but that are no longer maintained by the manufacturer. When such a device stops functioning as it should, patients lose its continued benefit and may face safety risks.
It also raises the question of how the possibility of technology becoming obsolete should shape the consent process, and whether it should be discussed with patients before they agree to treatment.
In 2013, we identified safety, privacy, autonomy, equity and trust as interests that merit protection when researching, designing and using neurotechnologies.
Ensuring that interventions aimed at promoting equity do not unintentionally have the opposite effect can be challenging. For example, Dr Emma Meaburn, our visiting senior researcher, recently blogged about how polygenic scores can inform us of an individual’s genetic predisposition to traits and disorders relevant to education. This information could potentially be used to direct early educational resources and support to those who are more likely to struggle in school. However, polygenic scores come with a host of assumptions and biases, meaning there is a real risk that instead of mitigating social and educational inequities as intended, they might further exacerbate them.
Is the same true for neurotechnologies? A key ethical question is whether, when using neurotechnologies, the perpetuation of inequity is a compromise we can, or should, tolerate in order to provide benefit.
So, while it remains the case that our 2013 report provides a solid foundation for researchers and policymakers, developments suggest review is needed to ensure ethical oversight is maintained and interests are protected in this dynamic and evolving space.
We look forward to exploring this further and are interested to hear your thoughts too. Please do post suggestions in the comment box below.
The framework laid down in this report is still an excellent guide for researchers navigating huge uncertainties. However, the changes in the world since 2013 concern not only technological capabilities but the incentivisation, prioritisation and delivery of research. Commenting as a research management professional, I would welcome a review of the recommendations, and perhaps of the application of the framework, within the evolving context: acknowledging the impacts of Brexit, COVID-19, the rise of AI, the economic agenda driving research, and specific changes in relevant legislation.
It will be very important to reflect on the very question of what should count as a neurotechnology, whether it continues to make sense to differentiate between neurotechnologies and non-technological neurointerventions, and, more importantly, whether it is justifiable to subject the former to ethical scrutiny while leaving the latter largely uninvestigated from an ethical point of view.
Traditionally, the assumption has been that neurotechnologies, most prominently DBS, raise special ethical questions because they are invasive (penetrating the skull), impact people directly (via electrical stimulation of brain tissue) and render them passive recipients of a new mode of thought or behaviour. However, allegedly non-technological approaches like extensive lifestyle psychiatry can also be perceived as highly (psychologically and socially) invasive in people's lives, and digital phenotyping in mental health care can render patients passive deliverers of data. At the same time, to be successful in depression treatment, TMS requires long-term active involvement of patients; DBS procedures are preceded by intensive reflection by patients on whether, when and how to get involved in such treatments; and in neurofeedback the divide between direct and indirect becomes totally blurred, as the idea is to directly change brain function via mental, i.e. indirect, efforts. To bring the field of neurotechnology further, it will be crucial to stop focusing on 'technology' in the strict sense of the word, isolating some techniques from the (social, medical and personal) world in which they are embedded, and instead to investigate what interventions bring about change in human thought and behaviour, and potentially in their brains, and how these interventions are to be evaluated from an ethical point of view, considering their impact on significant human relationships.
It will be important to ensure that the use of neurotechnology for therapeutic, remedial purposes is widely available to all who might benefit. There is a possibility that the various treatments developed might only be available to those with significant financial means. A medium-term concern in the development of neurotechnology is its extension beyond remediation to enhancement, which might be achieved through both medication and implants. The enhancement might be cognitive and/or physical. Until recently this possibility has been in the domain of dystopian science fiction, but history tells us that once a technology has been developed it can't be 'un-invented', and that its exploitation will radiate out from its inception in a wide variety of ways, with both intended and unintended consequences, driven in no small part by the perceived ability of the technology to make money for investors. I wonder if the Council will be considering the feasibility and timescale of such developments and how society might respond.
Nice blog Natalie. Hadn't really thought of the possibilities of neurotech directly to address educational inequalities. It'll be interesting to see what the concrete options are.