The news has been full of debates in recent weeks about the legitimate and illegitimate interests that national governments might have in collecting information about our online activity. Data about our internet use, however, is not the only kind of electronic trail we leave behind that others might find interesting.
A fortnight ago the medical device company Medtronic announced the first implantation in a patient of a deep brain stimulation (DBS) device that is capable of collecting data about users’ brain activity at the same time as it delivers neurostimulation. The Activa PC+S DBS system received a CE mark at the start of this year, meaning that it can be marketed for approved medical uses throughout Europe.
DBS, which involves the implantation of electrodes in the brain to deliver electrical impulses, is used to help alleviate the symptoms of serious conditions such as Parkinson’s disease and epilepsy. There is also increasing interest in using it to treat psychiatric and mental health disorders such as depression and obsessive-compulsive disorder. DBS has been used for therapeutic purposes for two decades, but the collection of data by these devices is a recent development.
DBS offers real benefits to people with serious conditions who lack other effective treatment options, but it can also bring side-effects such as changes in mood, memory or verbal fluency. Information collected from implanted neurodevices could allow the parameters of neurostimulation to be calibrated in order to maximise therapeutic effects and minimise unwanted ones.
DBS devices are not the only neurodevices that could be used to collect data about brain activity. In May this year Lancet Neurology published a proof-of-concept research study involving a ‘partially invasive’ brain-computer interface (BCI) device, developed by NeuroVista, which continuously monitors brain activity in epilepsy patients via electrodes positioned on the brain’s surface. From this, abnormal activity can be identified and the information transmitted wirelessly to a hand-held device that uses coloured lights to warn users of the probability of an impending seizure. It is hoped that data collected in this way will predict the onset of seizures more effectively than existing methods and will greatly improve the quality of life of those living with serious epilepsy.
Data collected directly from neurodevices not only offer benefits for individual patients’ treatment, but are also potentially of considerable public value for health research. This is particularly so given how little is still known about how DBS actually achieves its therapeutic effects, despite years of clinical use. Our report Novel neurotechnologies: intervening in the brain, published in June, emphasises the need for more high-quality data on the benefits and risks of medical devices such as those used in DBS, so that patients and clinicians can make decisions based on sound evidence. Yet there is currently no reliable method for collecting evidence of how these devices are working once in use. Our report recommends the creation of registers of clinical experiences and outcomes to aid the collection and dissemination of this evidence. Data collected by neurodevices themselves are an exciting development that could contribute valuable content to registers of this kind.
However, alongside these new opportunities for improving patient care and gaining a better understanding of what happens when we intervene in the brain, comes the responsibility to attend to any potential ethical and legal issues they might raise.
At the moment it is pure science fiction to suggest that getting hold of data about someone’s brain activity would permit you to read their minds. But these data nevertheless convey sensitive information about an individual’s brain functions, health and medical treatment. Despite the impossibility of using these data to read the contents of our thoughts, we may nevertheless have a powerful sense that what happens in our brains is especially “about us” and thus have particular concerns about how it is accessed and used by others. In addition to this, the kinds of neurological and mental health disorders that novel neurotechnologies are used to treat can be sources of stigma. If these sensitive data are identifiable as belonging to an individual patient, there are legal obligations under the data protection regime, as well as important ethical considerations, in respect of how they should be handled to protect patients’ privacy and to guard against discrimination.
Creative and flexible systems for ensuring the ethical governance of sensitive data about brain activity are likely to be particularly important given that the kinds of neurodevices we are looking at here involve the automated collection of data. Under more typical circumstances, for example when we visit a doctor and provide our medical history and a blood sample, we can usually discuss with our doctor there and then whether this information reveals anything particularly unusual, and how it might inform our care or be used for wider health research purposes. However, when a neurodevice collects data on a continuous basis over an extended period, it is harder to predict at the time it is implanted what could later be revealed, how this might be seen as significant, or who might be interested in using it.
Ensuring that patients understand and have, where required, consented to particular research uses of their medical information is especially tricky when some questions (such as who might seek access to their data, and who might wish to fund or conduct research using these and for what purposes) cannot be answered at the time the data are collected. These challenges are not newly raised by data-collecting neurodevices; they have been considered in detail in respect of the ethical governance of health research resources such as biobanks and long-term cohort studies. In those contexts various solutions have been proposed, such as the use of broad consent agreements supported by enhanced independent ethical oversight, or the substitution of one-off, up-front consent arrangements with ongoing, relational consent processes. These kinds of models might usefully be extended to circumstances in which neurodevices are used to collect data.
One eventuality that even the most open and discursive consent processes cannot guard against, however, is the illegitimate interception of sensitive information. This could be a particular risk where brain activity data are transmitted wirelessly, as is likely to be the case where they are collected by devices implanted in the brain. At the moment, the risk of interception or hacking of wireless transmissions by these devices remains a somewhat speculative concern. Our report recommends that the UK regulator, the Medicines and Healthcare products Regulatory Agency, should monitor the situation. However, it is not too early for those developing neurodevices designed to capture brain activity data to attend to potential points of vulnerability in their secure storage and transmission.
Neurodevices that can tell us how our brains and therapeutic interventions are working offer considerable individual and public benefits. However, they also pose possible challenges to protecting users’ privacy. We may see a parallel here with the debates about governments’ interests in our online activity: it may not be sufficiently reassuring to know that the intelligence services are not reading the contents of our emails if they nevertheless know to whom we are writing; similarly, just because data about brain activity do not permit mind-reading, this is no reason to be complacent about the need to handle them with respect.